Dataset schema: title (string, 1 to 827 chars), uuid (string, 36 chars), pmc_id (string, 5 to 8 chars), search_term (string, 18 classes), text (string, 0 to 8.42M chars)
Japanese health and safety information for overseas visitors: protocol for a randomized controlled trial
d3bf9af5-e11d-4d7c-bcb3-3830e0aa6a6f
7981386
Health Communication[mh]
The number of overseas visitors to Japan has steadily increased over the last decade from 8.6 million in 2010 to 31.8 million in 2019 . Notwithstanding the disruption to travel caused by the COVID-19 pandemic, this number will continue to rise due to increasing global tourism, international conferences, and major sporting events . The potential for public health issues among mass gatherings at these large events should be considered . It is imperative that overseas visitors are able to access information about the health care system of the country they are visiting to reduce risks and enjoy a comfortable stay . Wadhwaniya and Hyder examined how overseas visitors obtained information and where they visited. Some of these visitors were immunized at clinics before travelling to developing countries, even though the health risks were not confined to those countries. There are three main concerns associated with the low level of health information accessed by overseas visitors to Japan. First, overseas visitors tend to be young adults and think they are not very likely to become sick while travelling, but they are at high risk for injury . Only 18% (45 out of 241) of overseas visitors, with a median age of 30–39 years, accessed information about the Japanese health care system in our previous study . As one of the fastest growing host countries, Japan needs to rethink how its health care information will reach overseas visitors, including young adults. Second, the effectiveness of pretravel health issue prevention is dependent upon the presentation and content of the information . Health information for overseas visitors is usually provided through websites, pamphlets, travel books, or visiting clinics in their home countries . Currently, public health authorities of various countries provide health and safety information. This information is located at disparate places and may be inadequate for certain overseas visitors. Furthermore, much of the information is about infectious diseases and immunizations for developing countries . For instance, in our earlier study conducted before COVID-19, we concluded that overseas visitors are most concerned about medical costs, the Japanese language, and informed consent at clinics and hospitals, but there is not enough information to decrease these concerns . Third, although studies have confirmed that educational games are beneficial for sharing health-related information , we have not found educational games that provide health-related information for travellers. Overseas visitors generally consider Japan a developed nation that has a health care system with high standards. However, they do not know how to navigate the Japanese health care system should the need arise. Host nations have an obligation to provide accurate and useful information to overseas visitors about their health care system , illness prevention and procedures to access health facilities in an efficient manner so that overseas visitors are not anxious about visiting other nations . Comprehensive and effective health education methods can convey vital information. Advancements in digital technology are driving changes, and information is now provided in several languages and in various formats . These changes benefit most visitors, including young visitors, who are more likely to be at risk of injury when visiting foreign countries. A digital game is an attractive way to distribute visually and culturally relevant information . 
In a previous study, a digital game on insulin therapy for children with type 1 diabetes was used, and non-supervised use of the educational game “L’Affaire Birman” improved insulin titration and carbohydrate quantification results . Another game used by general surgery residents in classrooms produced a significant increase in short- and long-term retention of medical knowledge, with high learner satisfaction . A separate study of nursing students showed that an educational game was both liked and accepted by the students and considered a satisfying teaching technique . Digital games can also be used to share information on travel health with overseas visitors. In this research, we will evaluate the effect of a five-minute digital game titled Sa-Chan Japan (Table ). We will examine the levels of satisfaction and motivation of overseas visitors to Japan regarding their educational experience. Study design and procedures We will conduct this randomized controlled trial to examine the efficacy of an animated game in improving both satisfaction and behavioural change among current and potential visitors to Japan. The participants will complete the survey online and will answer a questionnaire on satisfaction and behavioural change before and after participating in one of two interventions. We will evaluate the changes in their satisfaction and motivation levels (Fig. ). Participants Sample size In this study, we expect to recruit 1002 participants via a Macromill internet panel. The sample size calculation used a 95% confidence level and 80% power to detect a difference of 0.178 in the questionnaire score, with a standard deviation of 1.0, an extra parameter of 0.0 and an alpha of 0.05. Eligibility Individuals who are planning to visit Japan from the United States, the United Kingdom and Australia will be recruited via a website, where they will indicate whether they are willing to participate in online research through a company. The questionnaire will be in English only to prevent biases in interpretation. Individuals who are 18 years old or older, understand English, and have previously visited or wish to visit Japan will be considered eligible, to ensure the validity of the responses to our questionnaire. We will consider individuals who are interested in the health care services in Japan. Enrolment procedure We will randomly allocate the participants to either an intervention group or a control group through the Macromill Company services. The participants in this study will be screened by monitors through an online questionnaire. The participants will be able to access the survey site and to receive e-mail notices, and the monitors will determine who is enrolled in our study. At the beginning of the questionnaire, the target conditions are explained, and the questionnaire is designed so that only those who meet the conditions stipulated by our research can complete it. The participants will be asked about their satisfaction with the Japanese health care system and health information. We will repeat this procedure until we reach our required sample size. We will also ask the participants to answer the questions based on their current knowledge and not to search for answers on other websites or in other references. Each participant will receive a one U.S. dollar, one Australian dollar or one Euro gift certificate from Macromill Company upon completion of the eligibility survey in March 2021. 
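The target of roughly 1000 participants can be approximately reproduced from the design parameters stated above. The following Python sketch is a generic two-sample, normal-approximation calculation under our own assumptions (equal group sizes, two-sided test); it is an illustration, not the authors' actual computation.

```python
# A minimal sketch, assuming a two-sided two-sample comparison of means with
# equal group sizes; the parameters are taken from the protocol text above.
from scipy.stats import norm

delta, sigma = 0.178, 1.0      # detectable difference and assumed SD of the score
alpha, power = 0.05, 0.80      # significance level and desired power

z_alpha = norm.ppf(1 - alpha / 2)        # ~1.96
z_beta = norm.ppf(power)                 # ~0.84
n_per_group = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2

print(round(n_per_group), 2 * round(n_per_group))
# -> roughly 496 per group (~992 in total), broadly consistent with the
#    1002 participants the protocol plans to recruit.
```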
Interventions Intervention The intervention group will watch a five-minute digital game titled Sa-Chan Game in English (Table ). This animation is in the format of a quiz that aims to provide information on the health care system and safety in Japan for overseas visitors. Its content is based on the results of a previous study on overseas visitors’ concerns about visiting Japan . It starts with Asian music and contains 11 items. We will share the animated game through a website. Control intervention The control group will watch a four-minute digital animation in English named Mari Info Japan (Table ). Its aim is to provide information about the health care system in Japan for overseas visitors . It lasts 4 min and contains 11 items in English. We will provide the digital animation in the same manner as for the intervention group, through a website. Outcomes The primary outcome of this study is the difference in the average or median CSQ-8 (8-item Client Satisfaction Questionnaire) score between the participants who will have played the Sa-Chan game and the controls immediately after the interventions. We will assess the outcome using a self-administered questionnaire, the CSQ-8 scale, which has been used widely in health education and has been shown to be reliable and valid in previous research . The CSQ-8 is an eight-item questionnaire that uses a four-point Likert scale and will be used to assess the respondent’s level of satisfaction regarding the health care system and safety in Japan. The total score ranges from 8 to 32, and a higher score denotes greater satisfaction. The reliability of the questionnaire will be examined with Cronbach’s alpha. The second outcome of the study is the difference in motivation between the participants who will have played the Sa-Chan game and the controls immediately after the interventions. We will ask one question, “Are you likely to follow this information yourself?”, and the participants will respond using a four-point Likert scale, which will be used to determine whether they intend to change their behaviour and to assess their level of motivation to follow Japanese health-related guidelines. We will collect the data before and after the interventions and evaluate them with the Information-Motivation-Behavioural Skills model . This model has been used in a number of risk reduction behaviour studies . The third outcome is whether the participants understood the information presented in the Sa-Chan game. When a participant understands the information corresponding to each of 15 items, he or she will respond with “yes” to that item. The questions will be related to both interventions: a participant who understands how to deal with the corresponding topics in Japan should be able to choose the correct answers. In total, the questionnaire used to assess outcomes (1), (2), and (3), together with the intervention itself, will take less than 10 min to complete. The participants will be visitors or individuals who wish to visit Japan. Other information We will also record basic characteristics of the participants in this randomized controlled study, such as their previous visits to Japan, sex, age, and educational level. We will determine whether the distributions of these characteristics are balanced between the two groups and identify factors that might influence the results. We conducted a pilot test with 13 participants at a college in New York on August 12, 2018. 
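As a rough illustration of how the CSQ-8 total score and its Cronbach's alpha reliability described above could be computed, the following Python sketch uses a hypothetical response matrix; the function name and the toy data are ours, not part of the study protocol.

```python
# Illustrative sketch only: synthetic Likert responses, not study data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
responses = rng.integers(1, 5, size=(30, 8))   # 30 respondents, 8 items, scores 1-4
csq8_total = responses.sum(axis=1)             # total score ranges from 8 to 32
print(csq8_total.min(), csq8_total.max(), round(cronbach_alpha(responses), 2))
```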
Bias prevention The allocation of participants to either the control or the intervention group will be blinded to both the participants and the researchers. Data analysis For the primary outcome of this study, we will analyse the difference in the median CSQ-8 scores recorded before and after the intervention with the Wilcoxon signed-rank test. To compare the pre- and post-intervention scores between groups, the Wilcoxon rank-sum test will be used. We will adjust for other potential demographic factors that might affect the results in the multiple regressions. For the secondary outcome, regarding motivation, we will compare the differences in the pre- and post-intervention scores for the behavioural change question between groups with the Wilcoxon rank-sum test. The third outcome is related to the participants’ understanding of the Sa-Chan game, which includes health knowledge questions; we will determine whether the answers are related to the characteristics of the participants. All data analyses will be conducted using the JMP statistical package (version 14.0). The answers provided for the open-ended questions about Japanese health information will be examined by word-frequency analysis, which involves a word relationship network and co-occurrence analysis, using the language analysis software Text Mining Studio (version 6.2) . 
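The following Python sketch illustrates the kind of nonparametric comparisons described in the data analysis plan. The protocol specifies JMP; this SciPy version and the placeholder score arrays are ours, for illustration only.

```python
# A minimal sketch, assuming pre/post CSQ-8 totals (range 8-32) for each arm;
# the synthetic arrays below are placeholders, not study data.
import numpy as np
from scipy.stats import wilcoxon, ranksums

rng = np.random.default_rng(1)
pre_game, post_game = rng.integers(8, 33, 50), rng.integers(8, 33, 50)   # Sa-Chan arm
pre_ctrl, post_ctrl = rng.integers(8, 33, 50), rng.integers(8, 33, 50)   # Mari Info arm

# Pre- vs post-intervention change within an arm: Wilcoxon signed-rank test (paired)
print(wilcoxon(pre_game, post_game))

# Between-group comparison of the pre-to-post change scores: Wilcoxon rank-sum test
print(ranksums(post_game - pre_game, post_ctrl - pre_ctrl))
```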
The study will offer a unique digital education strategy in the form of the game Sa-Chan to overseas visitors to stay healthy and safe. To welcome visitors from other nations, the host country needs to provide practical and useful information in an attractive and effective manner.
Demographic profile of patients seeking teleophthalmology consultations through e-Sanjeevani: Retrospective analysis of 5138 patients from North India
b6370a01-13a6-4e47-89ae-0ec3e00e6920
9940582
Ophthalmology[mh]
This was a cross-sectional study to assess the demographic profile of patients seeking teleophthalmology consultations through the e-Sanjeevani platform. The study was conducted as per the tenets of the Declaration of Helsinki. Ethical clearance was obtained from the Institutional Ethics Committee, PGIMER, Chandigarh (INT/IEC/2020/SPL-817). Demographic data of patients seeking teleophthalmology services from May 2021 to February 2022 through e-Sanjeevani at our tertiary care hospital were collected retrospectively. e-Sanjeevani The e-Sanjeevani platform was initiated in August 2020 under the National Health Mission (NHM) as a part of the “Digital India” initiative. e-Sanjeevani works on the hub and spoke model, where larger governmental and medical college hospitals in states act as “hubs” and several SCs and PHCs in the periphery act as “spokes.” Doctors at various hubs provide telemedicine services to community health officers (CHOs) present at peripheral centers. The e-Sanjeevani platform provides doctor-to-doctor (e-Sanjeevani) and doctor-to-patient (e-Sanjeevani OPD) consultations. The e-Sanjeevani application has a doctor’s dashboard where all the details of the teleconsultations made by the doctor can be viewed; details regarding the number of consultations made, pending calls, and the duration of video conferencing completed are displayed . e-Sanjeevani at our tertiary care hospital provides doctor-to-doctor consultations for various SCs, PHCs, and CHCs in various districts of the state of Haryana. e-Sanjeevani is user-friendly software with which CHOs can register a patient, record the patient’s chief complaints, upload external photographs of the eye, and even make a video call or use a chat box to contact the doctor at the hub center. After a thorough evaluation of the patient’s complaints and an assessment of their reports and photographs, the doctor makes a provisional diagnosis. Appropriate investigations and treatment are then entered into the application. Finally, a printed report is generated that includes the patient’s complaint, provisional diagnosis, and treatment advised by the doctor . Demographic details related to the patient’s age, gender, residential address, provisional diagnosis, and medicines prescribed were noted. Categorical variables were summarized as percentages, and continuous variables as means. Statistical analysis was not applied due to the descriptive nature of the study. 
A total of 5138 patients were teleconsulted over 9 months, with an average of 17 consultations per day. The mean age of these patients was 37.64 ± 19.34 years, with 44% males and 56% females. Out of 5138 calls, 382 (7.4%) were wrongly addressed cases and were related to other specializations. Most of the teleconsultation calls were made from Palwal district (19.8%), followed by Hisar (14.5%) and Sonipat. highlights the district-wise distribution of teleconsultation calls made through e-Sanjeevani at our hospital. Dry eye accounted for the majority of the patients (21%), followed by allergic conjunctivitis (18%), cataract (15%), and refractive error (14%). Less common eye problems reported were stye (4.4%), blepharitis (2.3%), congenital nasolacrimal duct obstruction (2.3%), subconjunctival hemorrhage (2.2%), periorbital edema (1.4%), and pterygium (1.3%). Rare eye diseases reported were xanthelasma, episcleritis, and acute conjunctivitis. A provisional diagnosis could not be made in 8.9% of cases. highlights the list of provisional diagnoses made through teleconsultations. A majority of these patients could be managed medically through telemedicine (56.6%). Conditions such as cataract and diabetic retinopathy, as well as cases requiring optic nerve evaluation, required referral to a nearby ophthalmologist for complete examination, evaluation, and surgical management (11.6%). Furthermore, 21.7% of patients with refractive error or presbyopia were referred to a nearby optometrist for refractive correction. Carboxymethyl-cellulose and olopatadine were the most common topical drugs prescribed. provides a detailed description of drugs prescribed to the patients. e-Sanjeevani was initiated in 2020 under the Ayushman Bharat scheme of the Government of India to provide teleconsultations to patients located in remote areas. Since its introduction, there has been a boost to digital health, with over 1.6 crore consultations done so far. e-Sanjeevani has established itself as a parallel stream of health care service delivery. With over 1 lakh doctors and paramedics on board, teleconsultations are provided in various specialties such as medicine, pediatrics, ENT, ophthalmology, psychiatry, dermatology, orthopedics, and obstetrics and gynecology. The index study looked at the demographic details of patients seeking teleophthalmology consultations at a tertiary hospital in north India. The mean age of our patients was 37.64 ± 19.34 years, with 44% males and 56% females. Similarly, in a study by Verma et al. 
, the predominant age group seeking teleophthalmology consultations was between 21 and 40 years of age. This highlights the fact that most of these patients are the primary wage earners of the family and find it difficult to leave their hometown and travel to get medical treatment at a district tertiary hospital. It also suggests that older people are less attentive to their health and prefer to stay home or try home remedies, whereas young people today are more aware and educated and thus tend to seek medical advice early. There was an unequal distribution of teleconsultation calls, with the majority of the calls made from the districts of Palwal, Hisar, and Sonipat, and no calls received from centers located in Panipat, Rohtak, Nuh, and Charkhi Dadri. Lack of manpower, lack of public awareness about the available teleconsultation services, and poor network connectivity may be a few reasons for this unequal call distribution. These factors should be urgently addressed by the local authorities to allow widespread distribution of teleconsultation services across the state of Haryana. Our study showed that anterior segment problems such as dry eye, allergic conjunctivitis, stye, and blepharitis can be easily diagnosed and managed using the e-Sanjeevani platform. Similarly, Verma et al. have shown the feasibility of a teleophthalmology setup to diagnose and manage patients with adnexal and orbital problems. In a study by Misra et al., lens-related (38.3%) and ocular surface pathologies (30.2%) were the most common diagnoses made. The use of the eyeSmart EMR application along with slit-lamp examination allowed vision technicians to capture good-quality anterior segment pictures. This probably explains the higher likelihood of diagnosing cataract compared with our study, in which slit-lamp examination was not possible. Patients diagnosed with refractive error were referred to an optometrist. Patients requiring detailed evaluation for cataract, diabetic retinopathy screening, or optic nerve evaluation were referred to a nearby ophthalmologist. Fundus evaluation could not be done due to the lack of fundus cameras at the peripheral centers at present. The e-Sanjeevani application allows the doctor to select from a range of medicines available at the peripheral centers, along with the frequency, dosage, mode of drug delivery, and duration of treatment. The prescribed drug can then be easily explained and dispensed to the patient by the CHO. Carboxymethylcellulose and olopatadine hydrochloride were the most common topical medications prescribed. The majority of our patients (56.6%) could be managed through teleconsultations, and the rest were referred to an ophthalmologist. The referral rate was higher in the study by Misra et al., probably because a greater number of cataract patients were diagnosed using the eyeSmart app and thus more patients were referred to higher centers. Teleophthalmology is an effective tool to triage urgent referrals such as trauma, chemical injuries, and retinal detachments, allowing for better structural and functional outcomes in such cases. Unnecessary referral of patients who can be managed easily at SCs, PHCs, and CHCs through teleconsultation adds to treatment costs, including transportation charges, and to the patient burden at tertiary care centers. 
Apart from this, teleconsultation helps in providing health education to the patients as well as the health care providers at the primary and community health center level. Interdisciplinary opinions can also be taken among various other departments as in the case of polytrauma. In today’s era of the ongoing pandemic, teleconsultation provides a channel to safeguard both the patient as well as the physician. The provision of audio and video conferencing in e-Sanjeevani allows the doctor to interact with a patient and understand his complaints in a better way. It is similar to live face-to-face interaction. In addition, if audio-video conferencing is not possible due to poor network connectivity, the CHOs have the provision to upload the images of the patient. The images can be captured using a smartphone camera and then uploaded along with the patient’s case sheet. This allows the doctor to analyze the images and reach a provisional diagnosis. demonstrates various clinical presentations where the diagnosis was possible based on the clinical photograph clicked on a smartphone camera by CMO. Our study had a few limitations. First, retinal details could not be assessed in any of our patients due to the lack of a fundus camera at peripheral SCs, PHCs, and CHCs. Similarly, fine details of the anterior segment were not possible due to the lack of any slit lamp-based imaging devices. Despite this, suspected patients with retinal and uveitic diseases were urgently referred to a nearby ophthalmologist, thus avoiding any delay in treatment. Second, poor network connectivity in some areas did not allow good audiovisual conferencing and thus a definitive diagnosis was not possible. To conclude, e-Sanjeevani is an effective tool in establishing an ocular diagnosis and providing timely intervention. It is useful in providing teleophthalmology consultations to remote areas, thus overcoming the barriers of distance, time, and cost. Future developments in technology and the introduction of slit lamp-based and fundus cameras would allow doctors to assess anterior segment and fundus details in a better way and thus triage and treat the patients accordingly. Financial support and sponsorship Ayushman Bharat Scheme, National Health Mission, Haryana, India. Conflicts of interest There are no conflicts of interest. Ayushman Bharat Scheme, National Health Mission, Haryana, India. There are no conflicts of interest.
Essentials in Minimally Invasive Gynecology Manual Skills Construct Validation Trial
98375242-6f71-4d32-acc1-7b89d3e9cc02
7316146
Gynaecology[mh]
Messick's validation framework was chosen for Essentials in Minimally Invasive Gynecology to remain consistent with a previous systematic review on FLS and because it is advocated by the American Educational Research Association, the American Psychological Association and the National Council on Measurement in Education. , The design was a prospective cohort study comparing two sets of two groups of participants based on their level of training and their self-reported experience with both hysteroscopic and laparoscopic surgery and surgical simulation. The first two groups comprised novice (postgraduate year [PGY]-1) and mid-level (PGY-3) residents, each in the first 100 days of their training year; the second pairing included those considered proficient (ABOG-certified obstetrician–gynecologists [ob-gyns]) and experts who were 2-year fellowship–trained in minimally invasive gynecologic surgery. The simulators used in this trial (Appendices 2–4, available online at http://links.lww.com/AOG/B925 ) have been described previously, as have the seven exercises—five laparoscopic (L-1 to L-5), and two hysteroscopic (H-1 and H-2). Maximum times allocated for each of the laparoscopic exercises were determined by evaluating previous performances in the pilot study. No maximum time was required for the hysteroscopic skills, because all participants completed these in the pilot. A 13.5-minute orientation video was provided to standardize exposure to both systems and each of the seven exercises. Then, assisted by the orientation proctor, participants had approximately 45 minutes of structured orientation to the systems and were provided videos of each exercise. The participants performed all of the laparoscopic exercises while standing on their preferred side of the simulator to replicate standard positioning alongside the operating table. The two hysteroscopic exercises were performed in a sitting position. Before the timing started, the testing proctor, blinded to the trainee status and different from the orientation proctor, described the exercises and the measured parameters with the aid of a laminated instruction card. The following study exercises were performed: L-1. Laparoscopic sleeve-peg transfer (Fig. A, ) The participant was supplied with two laparoscopic Maryland grasping forceps to transfer six cylindrical sleeves from the floor of the Essentials in Minimally Invasive Gynecology LaparoBowl to one of six peg targets located on five contiguous panels and then back to the original location. The participant's exercise time was calculated, as were potential errors such as dropped sleeves and failure to properly execute a transfer. The maximum allowable time was 330 seconds. L-2. Laparoscopic pattern cut (Fig. B, ) This task required that the participant use laparoscopic scissors and a Maryland grasper to cut a circular pattern from the top layer of a double layer surgical gauze mounted on the LaparoBowl. The participant's exercise time was calculated, as were errors such as crossing lines, cutting both layers and either avulsing the gauze from the platform or the platform from the LaparoBowl. The maximum allowable time was 300 seconds. L-3. Laparoscopic extracorporeal tie (Fig. 
C, ) Participants used a standard Fundamentals of Laparoscopic Surgery laparoscopic needle driver and another grasping instrument to pass a 90-cm 2-0 silk suture with a swedged-on tapered and curved needle through marks on a short, fenestrated portion of Penrose drain affixed at a 45-degree angle to a sponge block placed on the floor of the LaparoBowl. The linear defect was approximated with a knot comprising three, single, extracorporeally formed throws, each sequentially transferred into the trainer and tightened with a knot manipulator. The exercise was completed by cutting both ends of the suture with the laparoscopic scissors. The maximum allowable time was 600 seconds. The specimen was evaluated for apposition of the fenestration's edges and knot formation. L-4. Laparoscopic intracorporeal knot (Fig. D, ) The participant was provided laparoscopic scissors and the same choices of needle driver or grasper offered in L-3. The target and its configuration were also identical, but the suture was a 15 cm length 2-0 braided polygalactin construct with a swedged-on and tapered curved needle. After passing the suture through the marks on the Penrose, the defect was approximated with a knot comprising three intracorporeally formed throws, the first of which was a double throw; the exercise ended when the suture was cut. The maximum allowed time and errors recorded were identical to those for L-3. L-5. Laparoscopic running suture (Fig. E, ) The participant was asked to use laparoscopic instruments and a running 20-cm 2–0 polygalactin suture with a swedged-on curved needle to approximate the long fenestration in a piece of Penrose drain marked with five pairs of black “targets and attached to three contiguous internal panels of the LaparoBowl.” Exercise time was recorded as were targeting, approximation, avulsion and other errors. Maximum allowable time was 600 seconds. H-1. Hysteroscopic targeting (see Appendix 2, http://links.lww.com/AOG/B925 , ) The participant was provided a hysteroscope with a 30-degree lens prepositioned in a sheath that included a 5-Fr operating channel containing a specially designed 4 Fr probe. The participant was asked to identify, properly orient and then use the probe to depress the 10 numbered targets in the simulated endometrial cavity in the order announced by the testing proctor. Exercise time and errors were recorded. H-2. Hysteroscopic foreign body (or polyp) removal (see Appendix 2, http://links.lww.com/AOG/B925 , ) The participant was provided the same hysteroscope assembly as described for H-1 but with grasping forceps prepositioned in the operating channel. The forceps were used to grasp, detach and remove 10 small silicone “polyps” from the simulated endometrial cavity in the order announced by the testing proctor. Elapsed time and errors including dropped targets were recorded. Approval for the overall study was obtained from the ACOG's institutional review board, Approval No. 38. Each of the academic sites also received approval from their local human participants institutional review board or appropriate ethics committee. Candidates at the academic medical centers were recruited by the local Principal Investigator and asked to complete the anonymous web-based survey that included a scoring system designed to identify appropriate participants for the four cohorts. 
A score of 0 meant no exposure to simulation or surgery, and a score of 1 meant minimal exposure to diagnostic procedures and no surgical experience (Appendix 5, available online at http://links.lww.com/AOG/B925 ) (Fig. ). The cohorts were defined both by level of training and self-described exposure to hysteroscopic and laparoscopic surgery and surgical simulation. Qualified candidates were provided a unique study number that allowed anonymous participation and storage and analysis of data. The key linking the study number to the participant's identification and contact information was stored securely in a web-based electronic database, with access limited to the Principal Investigator and selected study personnel. The three study proctors were selected based on a spectrum of skills and knowledge as well as performance in the pilot study. Each was trained to be proficient at both assembling and troubleshooting the study systems that comprised mechanical and endoscopic equipment as well as computer hardware and software. Proctors were made responsible for maintaining the integrity of study data and protocols, including candidate orientation, conduct of study exercises and appropriate acquisition, labeling and storage of videos and test specimens. Each proctor was trained to use the data-entry systems, which comprised electronic tablets with touchscreens that allowed for secure web-based data entry. The study psychometrician observed the activities at the first study site and then participated in subsequent review and discussions, as appropriate. In each of the locations or participating centers, the test environment comprised three dedicated areas: one for the structured orientation process and two for system orientation, participant testing, and data acquisition, all made free of potential disturbances. All study data, including participant qualification questionnaires and testing data, were stored on a cloud-based server using Secure Socket Layer for data transmission, data storage encryption, and password-based access for data retrieval. The server was designed with data storage redundancy to protect against possible data loss. Access to the study data was strictly limited to the principal investigator, the co-principal investigator, the statistician, the psychometrician, and the selected AAGL staff who were directly assigned to the Essentials in Minimally Invasive Gynecology project. The deidentified data were provided to the statistician and the psychometrician during data analysis and reduction to provide the final data and statistical products to the investigator group led by the principal investigator and co-principal investigator. The testing proctors were trained to measure and, when necessary, rate the performance exercises in the field, entering data that included exercise times, accuracy measurements and categorical variables into the appropriate electronic data forms. For participants who failed to complete a given exercise in the predetermined maximum time, the exercise was stopped, the maximum allowable time in seconds entered and the exercise designated as “did not complete.” Where applicable, study specimens were measured for both targeting accuracy and completeness, with data recorded in the electronic system. For later central review, study specimens were photographed and then sealed in containers labeled with the participant’s unique identification number. 
These included pattern-cut materials from L-2 and the Penrose targets for each suturing exercise: L-3, L-4, and L-5. The central review was performed at the study center by a different trained proctor who evaluated both the video capture and the stored specimens. If there was a time discrepancy of more than 5 seconds or discrepancies in metrics related to accuracy, a third review was performed by two proctors to resolve the differences. In such instances, a “final” score was obtained for each of the parameters by consensus. The primary outcome was the distinction between the novice and mid-level participants, and secondary outcomes included comparisons of the proficient and expert cohorts. For the pairwise primary and main secondary outcomes, the statistical plan was designed to determine whether any significant differences in scores and times could be observed between two levels of training. The sample size calculation was based on the need to analyze data from at least 30 participants to meet the assumptions of a normal curve under the central limit theorem, thus allowing each cohort to stand as its own normal distribution. To allow for attrition and the possibility of incomplete data or data errors, the recruitment target for each group was increased to 40. Captured simulation data were entered into an Excel spreadsheet. Using Excel, 95% CIs were constructed around the differences between the novice and mid-level means and the proficient and expert means for each scored simulation outcome. Each difference was subjected to hypothesis testing, where H0: x̄1 − x̄2 = 0 and H1: x̄1 − x̄2 > 0. Thus, any CI around a difference in means not including 0 would indicate a difference in cohort performance significant at the α=0.05 level. A total of 227 participants (77 novice, 70 mid-level, 33 proficient, and 47 expert) were enrolled from two AAGL sites and from 13 academic centers representing all five ACOG regions (Table ). Participants were divided into cohorts by training status and self-reported exposure to laparoscopic and hysteroscopic surgery and related surgical simulation. For the second and, if necessary, the third and final analysis, it was necessary to have complete video for the exercise as well as the study specimens. Full data were available on 67 novices, 61 mid-levels, 32 proficients, and 41 experts. These 201 participants comprised the evaluable study cohort, with the minimum sample size exceeded for each of the categories. Times for the five laparoscopic exercises (L1-5) by group are shown in Figure and can also be seen in Table . In general, the mean times in seconds (±SD) for the mid-level participants for all exercises were significantly less than for the novices: L-1, 187 (±45) vs 256 (±59); L-2, 232 (±55) vs 274 (±38); L-3, 284 (±107) vs 344 (±101); L-4, 376 (±141) vs 481 (±126); and L-5, 420 (±100) vs 494 (±106). For those in the expert group, exercise times were less than for the proficient participants: L-1, 138 (±30) vs 199 (±50); L-2, 164 (±51) vs 245 (±51); L-3, 182 (±62) vs 294 (±104); L-4, 178 (±71) vs 402 (±156); and L-5, 239 (±76) vs 386 (±135). Many participants were not able to complete a given laparoscopic exercise in the allotted time and, in such instances, were assigned the maximum time allocated for the given exercise for the purpose of calculating the exercise completion times. These did-not-complete rates are shown in Figure . 
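As an illustration of the confidence-interval comparison described in the statistical plan above, the following Python sketch constructs a 95% CI around a difference in cohort means. The simulated completion times are stand-ins patterned on the reported L-1 summary statistics, not the trial data, and the calculation is a generic normal-approximation version rather than the authors' Excel workbook.

```python
# A minimal sketch, assuming two independent cohorts and a normal-approximation CI;
# the simulated times below are placeholders, not the study data.
import numpy as np

rng = np.random.default_rng(2)
novice = rng.normal(256, 59, 67)     # e.g. L-1 times (s), novice cohort (n = 67)
midlevel = rng.normal(187, 45, 61)   # e.g. L-1 times (s), mid-level cohort (n = 61)

diff = novice.mean() - midlevel.mean()
se = np.sqrt(novice.var(ddof=1) / novice.size + midlevel.var(ddof=1) / midlevel.size)
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se   # 95% CI around the difference

print(f"difference in means: {diff:.1f} s, 95% CI: ({ci_low:.1f}, {ci_high:.1f})")
# If the interval excludes 0, the cohort difference is significant at alpha = 0.05.
```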
The novice cohort had high failure to complete rates in all but the extracorporeal knotting exercise (L-3), and for the mid-level and proficient groups, failure to complete rates were greater than 15% for the Circle Cut (L-2) and intracorporeal knot tying (L-4) exercises. Only the expert group had negligible failure to complete rates for all five laparoscopic exercises. Candidates were also assessed for precision and accuracy in each of the five laparoscopic exercises (Table ). For these calculations, only “completed-the-task” elements were used for the denominator, which varies according to task. For example, for L-1, dropped or incorrect transfer of sleeves were recorded. In this case, the mid-level group averaged the fewest incorrect transfers, the expert group had the lowest frequency of dropped sleeves, and novices had the highest frequency. Pattern cut (L-2) errors comprised crossing one or both lines and cutting the bottom layer. Performance here demonstrated a high degree of accuracy in the expert group and relatively frequent errors for the novices; the mid-level and proficient groups were similar. Few in any group erroneously cut the bottom layer (not shown). Extracorporeal (L-3) and intracorporeal (L-4) errors comprised accuracy (target entrance and exit errors); tissue handling (avulsion and tear through errors, not shown); and knot construction (“air knots” and square knot errors). These outcomes were similar to other groups where, in general, the mid-level and proficient participants were similar but superior to novices, and the expert group scored highest overall. For some outcomes, there were similarities between the experts and others, including errors in square knot formation and the creation of one or more “air knots.” Running suture (L-5) accuracy for the 10 targets is shown in Appendix 6 (available online at http://links.lww.com/AOG/B925 ) and Table . In this exercise members of the mid-level cohort were more accurate than the novice group, and the expert group had greater overall accuracy (sum of errors) than the proficient cohort. The mean exercise completion times for the two hysteroscopic exercises chosen for this trial are shown in Figure , as well as Table . The hierarchy of completion times for H-1 and H-2 was similar to that demonstrated in the laparoscopic exercises, but the differences, although significant, were not as profound. The novice group had longer mean completion times in seconds (±SD) than the mid-level group: H-1, 176 (±56) vs 141 (±48); and H-2, 200 (±96) vs 150 (±37). The proficient cohort had longer times than the expert group: H-1, 141 (±52) vs 117 (±33); and H-2, 138 (±33) vs 120 (±31). The expert cohort performed better than the other three and the mid-level and proficient groups had similar times for each of the two exercises. Unlike the laparoscopic exercises, virtually all of the participants in each group completed each of the two hysteroscopic tasks, and targeting errors were similar (Table ). The Essentials in Minimally Invasive Gynecology simulation-based assessment demonstrates multiple sources of evidence supporting construct validity. The unique nature of being gynecologic surgery-specific allowed the study to reflect the construct it was intended to measure—laparoscopic and hysteroscopic skills. Response process validity evidence is supported through a rigorous review process of the tasks incorporating quality control and an evaluation of rater discrepancies to align the scores with the intended construct. 
The availability of video capture and centrally stored specimens allowed for a structured remote review and analysis of all participants by more than one rater, a circumstance that largely removed observer error as a confounder. The Essentials in Minimally Invasive Gynecology simulation-based assessment distinguished a spectrum of performance outcome variables between groups, thereby supporting the interpretation that performance is related to participants' training level. Differences were demonstrated for both the primary outcome comparisons of novice (PGY-1) compared with mid-level (PGY-3) residents and the principal secondary outcome comparison of ABOG-certified proficient gynecologic surgeons without subspecialty training compared with those expert ABOG-certified ob-gyns who had completed an accredited 2-year fellowship in minimally invasive gynecologic surgery. These outcomes are even more discrepant when considering the high frequency of “failure to complete” tasks in the novice cohort, a circumstance that was also present to a degree in the mid-level and proficient groups, at least when compared with the expert cohort. These differences were generally lessened because calculation of both time and accuracy could not be performed if an exercise was not completed, a circumstance that occurred more often with decreasing level of training. Interestingly, the differences in performance were more pronounced for the five laparoscopic than the two hysteroscopic exercises. The reasons for this observation are not clear but may be related to a relatively lower degree of difficulty of hysteroscopic exercises. The similar performance of the mid-level residents compared with the proficient cohort could reflect some selection bias. The mid-level residents were all from centers where there was an interest and experience in simulation-based medical education, usually with an established fellowship in minimally invasive gynecologic surgery program, which may not be representative. Expert surgeons performed at a superior level in time and accuracy in almost all of the categories. The strengths of this validity trial include the large sample size from multiple centers that comprised regionally representative training programs. The rigorous design included field and centrally performed analysis of both time and measures of accuracy. In addition, the study used a contemporary validity framework describing how assessment interpretations can support defensible decisions, whereas the majority of the studies described by Cook et al in a meta-analysis used “an outdated or incomplete framework to interpret validity data, if they used any framework at all.” , There are limitations to the study. As stated, there could be selection bias, particularly for the mid-level cohort, for which training, including simulation-based medical education, may have been more robust than the average U.S. program. The proficient cohort, though meeting the sample size requirements, was nonetheless less regionally representative, and, consequently, performance characteristics might vary with a larger sample from a broader geographical spectrum. A larger study including community programs without a fellowship in minimally invasive gynecologic surgery could provide additional insight. It is also possible that the posttraining experience of the proficient and expert cohorts contributed to differences in performance. 
Although this variable was estimated in the survey, a more rigorous evaluation might provide evidence of a more granular nature. The long-term significance and consequence evidence of the Essentials in Minimally Invasive Gynecology simulation-based assessment were neither evaluated nor defined at the outset of this study. Consequence validity evidence examines the intended and unintended implications of deciding to use an assessment on downstream outcomes (eg, health systems, surgical practice variation, surgical outcomes, and what is not taught in order to learn how to pass the assessment). Despite its importance, consequence evidence is rarely reported as part of health professions' education validity studies (5–20%) and is lacking for utilization of the Fundamentals of Laparoscopic Surgery examination system for obstetrics and gynecology trainees. , As future studies are developed, consequence evidence will be an important factor when determining whether to continue to require Fundamentals of Laparoscopic Surgery or adopt modified or new high-stakes simulation-based assessments that have stronger content validity evidence such as Essentials in Minimally Invasive Gynecology. Providing a context for the development and assessment of both laparoscopic and hysteroscopic skills is an important educational element, perhaps critical for safe and effective contemporary gynecologic surgery. However, despite its adoption as a criterion for ABOG certification, there has been only one published assessment of the construct validity of the Fundamentals of Laparoscopic Surgery platform among trainees in gynecologic surgery. Investigators showed that comparative performance in standardized tasks using the Fundamentals of Laparoscopic Surgery box simulator could distinguish skilled from unskilled surgeons. Relatively few hysteroscopic simulation articles have been published since the mid-1990s (Wallwiener D, Rimbach S, Aydeniz B, Pollmann D, Bastert G. Operative hysteroscopy: results, security aspects, in vitro simulation training (hysterotrainer) [abstract]. J Am Assoc Gynecol Laparoscopists 1994;1:S39; and Lefebvre Y, Cote J, Lefebvre L. Teaching surgical hysteroscopy with a computer [abstract]. J Am Assoc Gynecol Laparoscopists 1996;3(Suppl 4):S25). Only a limited number, of variable quality, have evaluated construct validity, – and none have demonstrated predictive validity. , Those described in the literature are usually assessments of virtual reality devices that, because of cost, may limit access for most trainees and training programs. The Essentials in Minimally Invasive Gynecology Hysteroscopy Simulation System is a "low-fidelity" modular system that may provide an opportunity to expand this critically important aspect of gynecologic simulation because of its dramatically reduced cost compared with virtual reality systems.
A framework for on-implant spike sorting based on salient feature selection
e8f884d1-b944-40bb-8824-fc6ef71d4189
7327047
Physiology[mh]
In the realization of brain implants and neural prostheses, one of the main challenges is to increase the number of recording channels. This is mainly because of the significant increase in power consumption and data telemetry bandwidth, and also the enlarged physical dimensions of the neural recording implant. On-implant spike sorting is one of the possible steps towards overcoming such challenges by efficient data reduction. Generally, spike sorting can be performed through the following general steps: (i) filtering the raw neural signal (from 0.3 to 6 kHz) to preserve only the useful frequency content of neural spikes; (ii) detection of spike events upon the firing of neurons; (iii) extraction of spike wave-shapes from the filtered neural signal (for details of our spike detection and extraction method, refer to our previous work ); (iv) temporal alignment of the spike wave-shapes (to avoid additional hardware cost, spike wave-shapes in this work are aligned to the detection, i.e., first threshold-crossing, points); (v) mapping of the extracted spike wave-shapes into a feature space, known as feature extraction, which enhances the discrimination between spikes and noise, and also between different spike classes (also referred to as between-class variability); (vi) selection of a minimal subset of features, known as feature selection, in order to reduce the dimensions of the data being processed; and (vii) classification or clustering of the wave-shapes into different spike classes as isolated units. From the standpoint of computational load (and consequently hardware complexity), most of the traditional spike sorting algorithms are too heavy to be implemented on neural recording implants. To efficiently realize spike sorting on such implants, one solution is to reduce the dimension of the data being recorded. For on-implant online spike sorting, peak values and timings – , and zero-crossing points have been selected as simple and informative geometric features to sort spike classes. Furthermore, to enhance the discrimination between different spike classes, hardware-efficient mathematical transforms such as derivative transforms – , , , – and the four-level Haar wavelet transform , have also been used for feature extraction on brain implants. To make the spike sorting procedure complete, on-implant classification of spike wave-shapes has been realized using distance-based classification – and oblique decision tree classification methods , . System description In this work, we propose an automated method for online spike sorting dedicated to high-density, high-speed brain implants. The proposed method needs to be simple, agile, and reconfigurable, and at the same time should be physically implemented in compliance with the physical and electrical limitations of brain implants. The computational load of existing spike sorting procedures usually entails technical challenges such as hardware complexity, power consumption, and computation speed. The technique we propose overcomes these challenges by shifting the computational load from the implant to the external side of the system, where the complexity of the algorithm and its hardware implementation is not as critical. Traditionally, fully implantable neural recording systems comprise an implantable module and an external module communicating with each other via a wireless link. As depicted in Fig. , the miniaturized implantable module records intracortical neuronal activities on multiple channels using a penetrating microelectrode array. 
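As a concrete illustration of steps (i)–(iv) listed above, the following is a minimal NumPy/SciPy sketch of band-pass filtering, threshold-based spike detection, extraction, and alignment to the first threshold-crossing point. It is not the authors' implementation; the threshold rule, the pre-crossing offset, and the synthetic test signal are illustrative assumptions only (the 30 kS/s rate and 48-sample snippet length mirror the data set described later).

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 30_000            # assumed sampling rate (samples/s)
SNIPPET_LEN = 48       # spike length in samples (~1.6 ms at 30 kS/s)
PRE_SAMPLES = 10       # samples kept before the threshold crossing (illustrative)

def bandpass(raw, fs=FS, low=300.0, high=6000.0, order=4):
    """Step (i): keep the 0.3-6 kHz band where spike energy lies."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, raw)

def detect_and_extract(filtered, k=4.0):
    """Steps (ii)-(iv): threshold detection, extraction, and alignment.

    The threshold is k times a robust noise estimate (median/0.6745), a common
    heuristic that is not necessarily the rule used in the paper.
    """
    noise = np.median(np.abs(filtered)) / 0.6745
    thr = k * noise
    above = np.abs(filtered) > thr
    # first threshold-crossing points (rising edges of the boolean mask)
    crossings = np.flatnonzero(above[1:] & ~above[:-1]) + 1

    spikes, last_used = [], -np.inf
    for c in crossings:
        if c - last_used < SNIPPET_LEN:      # simple refractory/overlap guard
            continue
        start = c - PRE_SAMPLES
        if start < 0 or start + SNIPPET_LEN > len(filtered):
            continue
        # alignment: every snippet starts a fixed offset before its crossing
        spikes.append(filtered[start:start + SNIPPET_LEN])
        last_used = c
    return np.array(spikes)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    raw = rng.normal(0.0, 1.0, FS)           # 1 s of synthetic wideband noise
    print(detect_and_extract(bandpass(raw)).shape)
```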
The implantable module is in charge of the recording of neuronal activities. The external module, in general, communicates with the implantable module through bidirectional wireless communication. It receives the recorded information from the implant, stores the recorded data, performs signal processing tasks, and possibly sends data/configuration information back to the implant. The proposed spike sorting framework A conceptual block diagram of the system on which the proposed method is realized is shown in Fig. a. In this scheme, the implantable module records neuronal activities, runs digital signal processing procedures, and finally performs online unsupervised spike sorting. The external module is in charge of the calculation of the parameters with which the on-implant online spike sorter (OSS) is configured and calibrated. To significantly reduce computational and hardware complexity on the implant, the proposed spike sorting method is divided into two phases: an offline initial training phase implemented on the external module, and the main online spike sorting phase realized on the implant. What remains on the implant is a compact, low-power, and agile OSS that operates in real time with area- and power-efficient hardware and is configured using the results of the offline training phase received from the external module. The key value of the proposed spike sorting technique is in its potential to allow for a power- and area-efficient hardware implementation that operates in real time on a high-density neural recording implant. Prior to the start of the operation of the on-implant OSS, the system first telemeters neuronal activities (spikes) on all the channels to the external module through the wireless connection. An unsupervised offline spike clustering block (based on the silhouette statistic – and the k-means clustering algorithm ) on the external module then labels the spikes received from the implantable module. As shown in Fig. b, a shadow spike sorter on the external module (which includes an identical model of the on-implant OSS) receives both the spikes and the associated labels, and is optimized to perform the proposed spike sorting algorithm. The OSS model parameters are then sent to the implantable module in order to configure the on-implant OSS. After configuration, the on-implant OSS is able to perform spike sorting on the live stream of the neural signals being recorded. Salient feature selection As will be discussed later in this paper, existing spike sorting techniques commonly use specific geometric features for spike wave-shape isolation. Although such features offer the advantage of straightforward mathematical formulation and rather simple hardware implementation, they do not necessarily guarantee maximal discrimination between spike classes. The proposed online spike sorting technique is based on finding a minimal set of geometric features, hereafter referred to as salient features, that maximize the discrimination between spike classes. Each and every spike class is discriminated from all other spike classes (multi-label classification ) using a subset of salient features in the salient feature space. 
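Before turning to the saliency measure, here is a minimal sketch of the offline labelling step described above, in which the external module clusters telemetered spikes with k-means and uses the silhouette statistic to pick the number of units. It is one plausible reading of that step, not the authors' code; the candidate range of cluster counts and the synthetic templates are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def label_spikes_offline(spikes, k_candidates=(2, 3, 4, 5)):
    """Cluster extracted spike snippets (n_spikes x n_samples) on the
    external module and return the labels with the best silhouette score."""
    best = None
    for k in k_candidates:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(spikes)
        score = silhouette_score(spikes, labels)
        if best is None or score > best[0]:
            best = (score, labels)
    return best[1]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # two synthetic "units": noisy copies of two template wave-shapes
    t = np.linspace(0, 1, 48)
    templates = np.vstack([np.sin(2 * np.pi * 3 * t), -np.exp(-10 * t)])
    spikes = np.repeat(templates, 100, axis=0) + 0.1 * rng.normal(size=(200, 48))
    print(np.bincount(label_spikes_offline(spikes)))   # roughly 100 per unit
```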
As a measure for the extent of the overall separation of the class of interest (#i) from all other classes, the saliency of that class (ς_i) is hereby defined in such a way that it expresses both its discrimination from all other classes and the extent of the homogeneity of the distribution of all other classes (with respect to class #i). To quantify the saliency of class #i, the former is measured by the geometric mean of the associated distances, and the latter is quantified as the ratio of the geometric mean to the arithmetic mean of the same distances. It should be added that, according to the definition presented in Beauchemin et al. and Woodhouse et al. , the signal space is considered homogeneous with respect to a certain class if that class is equally separated from each and every other class in the signal space. In the simplest scenario, where the feature space is one-dimensional, the saliency of class #i (ς_i) is, therefore, formulated as

$$\varsigma_i=\frac{\left(\prod_{j=1,\,j\neq i}^{N_c}(d_{ij})^{P_j}\right)^{2}}{\sum_{j=1,\,j\neq i}^{N_c}P_j\,d_{ij}},\qquad(1)$$

where i is the index of the class of interest, d_ij is the discrimination index between classes #i and #j (refer to "Methods"), P_j is the relative probability of class #j, and N_c is the total number of classes. In general, the concept of class saliency can be extended to a K-dimensional space. For the k-th feature (k = 1, 2, ..., K), ς_i[k] is defined to express the saliency of class #i from all other classes (refer to "Methods"). From among all the features in the K-dimensional feature space, the most salient feature (MSF), k_i^1, is introduced here as the feature that distinguishes class #i from all other classes with the highest possible class saliency. This is, indeed, the first member of the salient feature set, determined by spanning the entire (K-dimensional) feature space. The index of the MSF for class #i is, therefore, expressed as

$$k_i^{1}=\arg\max_{\kappa\in\{1,2,\ldots,K\}}\{\varsigma_i[\kappa]\}.\qquad(2)$$

Selected from among the remaining K−1 features in the feature space, the second MSF (2nd MSF), k_i^2, is the feature most uncorrelated with the MSF (k_i^1) that best isolates class #i from the rest of the signal space. The 2nd MSF is mathematically determined as

$$k_i^{2}=\arg\max_{\kappa\in\{1,2,\ldots,K\}}\{\varsigma_i[\kappa]\times(1-\rho_i(\kappa,1))\},\qquad(3)$$

where ρ_i(κ, 1) indicates the correlation between the κ-th feature in the feature space and the 1st member of the salient feature set, i.e., the MSF (k_i^1). The term 1 − ρ_i(κ, 1) in Eq. (3) ensures that the information redundancy in the salient feature space is eliminated or at least reduced. In general, the process of selecting the L features of the highest saliency (i.e., forming an L-dimensional salient feature space for a given spike class, as formulated in "Methods") is referred to as the salient feature selection (SFS) process. In a given spike sorting problem, the value of L is determined by the user as a result of a tradeoff between hardware cost and the achieved classification accuracy (CA). 
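To make Eqs. (1)–(3) concrete, below is a small NumPy sketch that computes per-feature class saliency from a precomputed per-feature discrimination index d_ij[k] (for example, the exponential index described in "Methods", or any other positive distance) and then greedily picks the most salient, least correlated features. It is an illustrative reading of the SFS procedure, not the authors' implementation; in particular, using the absolute value of the sample correlation as ρ is an assumption of this sketch.

```python
import numpy as np

def class_saliency(d, P, i):
    """Per-feature saliency of class i (Eqs. 1/6).

    d : array (n_classes, n_classes, n_features) of positive discrimination
        indices d[i, j, k]; P : array (n_classes,) of relative probabilities.
    """
    others = [j for j in range(d.shape[0]) if j != i]
    dij = d[i, others, :]                    # (n_others, n_features)
    Pj = P[others][:, None]
    geo = np.prod(dij ** Pj, axis=0)         # weighted geometric-mean term
    ari = np.sum(Pj * dij, axis=0)           # weighted arithmetic-mean term
    return geo ** 2 / ari

def select_salient_features(d, P, i, spikes_i, L=2):
    """Greedy SFS (Eqs. 2 and 3, and their generalization): pick L features
    of high saliency while penalizing correlation with features already
    chosen, estimated here from the spikes of class i."""
    sal = class_saliency(d, P, i)
    corr = np.abs(np.corrcoef(spikes_i, rowvar=False))   # feature-feature corr
    chosen = [int(np.argmax(sal))]                       # MSF, Eq. (2)
    while len(chosen) < L:
        penalty = np.prod([1.0 - corr[:, h] for h in chosen], axis=0)
        score = sal * penalty
        score[chosen] = -np.inf                          # do not re-pick
        chosen.append(int(np.argmax(score)))             # Eq. (3) and beyond
    return chosen
```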
Figure a illustrates a spike classification problem with three isolated units, the mean values of which are shown using red, green, and blue solid lines (μ_1, μ_2, and μ_3, respectively). Here, each spike sample is taken as a feature. Hence, assuming that a neural spike is expressed using K samples, the main feature space consists of K feature dimensions. The SFS method is now used for the generation of a salient feature set, by selecting the samples that best discriminate each spike from the other spikes. The MSFs (samples) shown with circles on each spike class are indeed the ones that provide the highest saliency. Figure b–d present the details of the SFS process in the case of this neural spike classification problem, in which the horizontal axis is the "Feature Index (k)". The geometric means and homogeneities of the d_ij's associated with the three spike classes are shown in Fig. b, c, respectively, based on which the class saliencies are calculated and plotted in Fig. d. According to the subplots shown, the saliency of a unit with respect to the others (ς_i) peaks when the product of the geometric mean and the homogeneity associated with that unit is reasonably large. To evaluate and validate the success of the proposed concept of saliency in forming an efficient feature set (i.e., the salient feature set), we use the Bayes classifier to complete the spike classification process. We aim to show that there is a strong correlation between the saliency of the features used for classification and the CA achieved. The scatter plot in Fig. a presents the saliencies (calculated based on Eq.  in "Methods") versus the Bayes CA for the three classes shown in Fig. a. It can clearly be seen in this figure that the chance level of each class (illustrated by dotted lines) contributes to the associated classification accuracy. However, this contribution can be misleading when evaluating and comparing classification accuracies for classes of different chance levels. To have a fair comparison, we therefore eliminate the influence of the chance level from the CA. Hence, as a modified measure, the chance-level-independent CA (CA_CLI) is proposed as

$$\mathrm{CA}_{\mathrm{CLI}}=\frac{\mathrm{CA}-\mathrm{Chance}}{1-\mathrm{Chance}},\qquad(4)$$

in which CA is the classification accuracy with its conventional definition, and Chance is the chance level associated with each class. The same comparison after the elimination of the chance level from the classification accuracies of the clusters is presented in Fig. b. The distribution of the data points in this plot indicates a meaningful statistical correlation (correlation coefficient ≈0.8) between the log-saliency of the features and the CA_CLI they yield. Therefore, it can be concluded that the saliency metric proposed in this work is an efficient criterion for the selection of a subset of features for successful classification. Window discrimination For online on-implant spike sorting in this work, we use a window discrimination (WD) method in the salient feature space. The WD method benefits from an efficient hardware implementation, which is of crucial importance in the design of brain-implantable microsystems. In order to classify each and every unit, one discrimination window is assigned to each class in the associated salient feature space. The specifications of the four borders of each window are determined in the offline phase, and are subsequently stored in the on-implant spike sorter for online spike classification. 
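A minimal sketch of the window-discrimination step and of Eq. (4) follows. Two points are assumptions made for illustration only: the window borders are derived here from simple percentiles of the labelled training features (the paper does not state this rule), and a single shared two-dimensional feature space is used for all units, whereas the paper selects a separate salient feature pair per unit.

```python
import numpy as np

def fit_window(train_feats, lo_pct=1.0, hi_pct=99.0):
    """Offline phase: four borders (lower/upper bound per salient feature)
    for one unit, from its labelled training spikes (n_spikes x 2)."""
    return np.percentile(train_feats, lo_pct, axis=0), \
           np.percentile(train_feats, hi_pct, axis=0)

def window_discriminate(feats, windows):
    """Online phase: assign each spike (rows of `feats`) to the first unit
    whose window contains it; -1 if it falls inside no window."""
    labels = np.full(len(feats), -1)
    for unit, (lower, upper) in enumerate(windows):
        inside = np.all((feats >= lower) & (feats <= upper), axis=1)
        labels[(labels == -1) & inside] = unit
    return labels

def ca_cli(ca, chance):
    """Eq. (4): chance-level-independent classification accuracy."""
    return (ca - chance) / (1.0 - chance)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    unit0 = rng.normal([0, 0], 1.0, size=(200, 2))
    unit1 = rng.normal([6, 6], 1.0, size=(200, 2))
    windows = [fit_window(unit0), fit_window(unit1)]
    test = np.vstack([unit0[:50], unit1[:50]])
    truth = np.array([0] * 50 + [1] * 50)
    ca = np.mean(window_discriminate(test, windows) == truth)
    print(round(ca, 3), round(ca_cli(ca, chance=0.5), 3))
```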
The number of dimensions in the salient feature space is an important aspect of the feature selection method that needs to be decided upon. In this work, a salient feature space with two dimensions leads to satisfactory isolation of units with an accuracy of 87.6% or higher. For the classification problem under study with the three classes of unit activities shown in Fig. a, a two-dimensional (2D) salient feature space is defined for each class of unit activities. In this case, the multi-label discrimination windows in the associated salient feature spaces are shown in the scatter plots of Fig. . In these plots, (s[13] and s[19]) are the salient features used for the classification of unit #1, and (s[9] and s[15]) and (s[12] and s[22]) are the salient features used for the classification of unit #2 and unit #3, respectively. In these scatter plots, the coordinates of the (upper bound and lower bound) for the discrimination windows associated with units #1, #2, and #3 are {(−28, 70), (−3, 103)}, {(−11, 147), (−171, 185)}, and {(54, 222), (−133, 4)}, respectively. Performance evaluation For all the tests presented in this section, the open-access data set of prerecorded spike wave-shapes is used to generate the data required for both training and testing. This data set was recorded by a 10 × 10 Utah array from populations of neurons in the primary visual cortex (V1) of macaque monkeys ( Macaca fascicularis ) in response to natural images. From this data set, we generated a library of ~15,000 different spike classes. Each spike class consists of hundreds of spikes with signal-to-noise ratios ranging from ~0.3 to ~22 (with an average value of ~4.5). The sampling rate of the recordings is 30 ksamples/s, with a resolution of 8 bits. All the spikes extracted for classification are 48 samples long (1.6 ms). For each trial, a random selection of ~1450 spike classes (units) is chosen. From this "trial library", two, three, or four units are used to train and test each channel. For each and every channel, the spikes under each unit are used for training and testing with a 50–50% breakdown. To evaluate the efficacy of the idea of salient features and the proposed SFS approach, the results achieved in this work are compared with the other major feature extraction/selection methods that have already appeared in the literature. On-implant spike sorting methods normally use specific features with straightforward mathematical descriptions to classify spike wave-shapes (referred to as static methods in this work). The static techniques used for comparison include the peak-to-peak amplitude of the spike and the min-max of its derivative, hereafter referred to as spike and derivative extrema (SDE) , first and second derivative extrema (FSDE) , event-driven features (EDF) , discrete derivatives and their peak values, hereafter referred to as discrete derivatives extrema (DDsE) , zero-crossing features (ZCF) , minimum delimitation (MD) , and the Haar-wavelet-based frequency-band separation (FBS HT ) method , . All these feature extraction/selection methods (including the proposed SFS approach) are followed by the same type of classifier for a fair comparison. For this purpose, the Gaussian Naive Bayes classifier , is used. This classifier only requires data statistics (mean and variance) with no need for manual setting of parameters. In this study, all the resulting spike sorters are evaluated with the same data set. 
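As an illustration of this evaluation protocol (a 50–50% train/test split per unit and the same Gaussian Naive Bayes classifier for every feature extraction/selection method), a minimal scikit-learn sketch is shown below. The feature-extraction callable is a placeholder, the SDE-style example feature is only one plausible rendering of that feature set, and the majority-class definition of the chance level is an assumption, not necessarily the paper's definition.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

def evaluate_feature_method(spikes, labels, extract, test_size=0.5, seed=0):
    """Train/test a Gaussian Naive Bayes sorter on features produced by
    `extract` (maps an (n_spikes, n_samples) array to (n_spikes, n_features)),
    mirroring the 50-50% breakdown described in the text."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        spikes, labels, test_size=test_size, stratify=labels, random_state=seed)
    clf = GaussianNB().fit(extract(X_tr), y_tr)
    ca = clf.score(extract(X_te), y_te)
    # majority-class proportion as a simple chance level (assumed definition)
    chance = np.max(np.bincount(y_te)) / len(y_te)
    return ca, (ca - chance) / (1.0 - chance)    # CA and CA_CLI

def sde_like_features(spikes):
    """Example static features: peak-to-peak amplitude plus derivative extrema."""
    d = np.diff(spikes, axis=1)
    return np.column_stack([spikes.max(1) - spikes.min(1), d.max(1), d.min(1)])
```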
The regular CA, the chance-level-independent CA (CA CLI ), and the feature space dimension for all the aforementioned feature extraction/selection methods reported earlier in the literature of brain implants are presented in Fig. . Spike sorting using the SFS method proposed in this work exhibits significantly higher CA (89.5%) and CA CLI (73.4%) than all the other methods. It is important to note that, even with as few as 2 dimensions for the feature space, the SFS-based spike sorter outperforms all other sorters from the standpoint of the achieved CA. This translates to a much lower computational cost, which leads to significantly more power- and area-efficient hardware when it comes to on-implant physical implementation. To evaluate the efficacy of the proposed feature selection method in introducing a more appropriate subspace of features (i.e., the salient feature space), the CA of SFS followed by a Bayes classifier is compared with that of Bayes classification on the entire feature space (i.e., with no feature selection). This allows a fair judgment in the presence of all the factors contributing to the CA, including both within-class variability (noise content of the signal) and between-class variability (dissimilarity of class wave-shapes). Figure a presents the Bayes CA in the salient feature space versus that of the same Bayes classifier in the original signal space. Hereafter referred to as the "CA–CA plot", the plot shown in Fig. a provides a sense of how the SFS method can improve the resilience of spike sorting against both within-class and between-class variabilities. The less-than-unity slope of the regression line (0.57) in the CA–CA plot of Fig. a indicates that (in addition to dimension reduction and, consequently, computational complexity reduction) the proposed SFS method makes the CA of spike sorting less sensitive to the aforementioned variabilities. Figure b compares the CA–CA plots of Bayes spike sorting when different approaches are taken for feature extraction/selection. According to this comparison, the proposed SFS method exhibits the most resilient CA against signal variabilities (the smallest slope) and, at the same time, the highest CA. To verify and evaluate the proposed spike sorting method, the sequence of forming the salient feature space followed by WD for neural spike classification is studied. The overall signal processing results of this method (SFS + WD) are compared with two other similar works that contain wave-shape classification. It should be noted that, even though there are several works reporting on-implant spike sorting, only the works of Karkare et al. and Yang et al. realize "complete" on-implant spike sorting (they go all the way to spike wave-shape classification as the very last step). In the former, an l 1 -norm distance-based method is used for spike classification, referred to as l 1 -norm distance template matching ( l 1 -TM), for the classification of spike wave-shapes. As an alternative solution, the latter proposes the oblique decision tree (ODT) for on-implant spike sorting (traditional classifiers such as Bayes have a high computational cost and therefore cannot be implemented on brain implants). Figure compares the performance of the proposed method for 1- and 2-dimensional salient feature spaces (1D SFS + WD and 2D SFS + WD) with the other two approaches ( l 1 -TM and FBS HT +ODT). 
In both 1D and 2D spaces, the SFS + WD method proves to be superior to the other techniques in terms of both CA and CA CLI (i.e., with or without the influence of the spike chance level), with reasonably small calculation times. It was illustrated in Fig. b that the on-implant OSS comprises two main blocks: (I) the OSS internal parameters block, which consists of multiple register banks (holding the parameters received from the SSS), and (II) the OSS engine, which mainly comprises simple digital comparators. The register banks in the OSS, which are shared among all the 512 channels , include a bank of a total of 5 kbits to contain the salient feature indices, a bank of a total of 14 kbits to store the upper and lower bounds of the WDs for each and every salient feature, and a 3-kbit bank to hold the class identifiers associated with the salient features. The OSS engine is realized using merely a 5-bit comparator and two 7-bit comparators, which are properly time-shared among all the channels for WD tasks. Compared with the works of Karkare et al. and Yang et al. , the memory space required to implement the on-implant OSS proposed in this work is 5 times and 68 times smaller, respectively. In total, the on-implant OSS in this work is implemented using 1869 transistors per channel and takes a chip area of 0.0066 mm²/channel in a 130-nm CMOS process, whereas the former and the latter works occupy 0.077 mm²/channel and 0.023 mm²/channel in 65-nm and 130-nm CMOS technologies, respectively. 
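The memory figures above imply a very small per-channel budget; the short calculation below simply restates that arithmetic. The per-channel average and the choice of k = 1000 bits are my own illustration, not a breakdown given by the authors.

```python
# Register-bank budget of the on-implant OSS, as reported in the text
feature_index_bits = 5_000    # salient feature indices (5 kbits, taking k = 1000)
window_bound_bits = 14_000    # upper/lower window bounds (14 kbits)
class_id_bits = 3_000         # class identifiers (3 kbits)
channels = 512                # channels sharing the banks

total_bits = feature_index_bits + window_bound_bits + class_id_bits
print(total_bits, "bits in total (~22 kbits)")
print(round(total_bits / channels, 1), "bits per channel on average")  # ~43.0
```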
On-implant spike sorting methods normally use specific features with straightforward mathematical descriptions to classify spike wave-shapes. Examples of such features are the minima and maxima of the spike amplitude and their timing , , maximum slopes (either positive or negative) – , , , and zero-crossing times . Although those features correspond to critical points and carry important information about the spikes, they are not necessarily the best possible features for spike wave-shape discrimination. In this paper, we introduced a novel framework for on-implant spike sorting. The goal is to improve the CA and also reduce the hardware cost. The proposed framework comprises the SFS method and WD for spike classification. The main aim of the SFS method is to efficiently reduce the dimension of the data representation. The SFS method searches for the features that best distinguish each and every spike class from the rest of the spike classes in the signal space. It is guaranteed by definition that such features (referred to as "salient features") result in spike sorting in such a way that the geometric mean of between-class distances is maximized in the most homogeneous way. It is shown in this work that a set of such features can result in meaningfully higher classification accuracies compared with other spike sorting approaches existing in the literature (~2× reduction of classification error). The WD technique is used for multi-label classification of spike wave-shapes in the salient feature space. Taking advantage of both SFS and WD in a multi-label structure, online spike sorting is realized with higher CA at a significantly lower hardware cost (~5× reduction in the required memory), compared with other similar works reported. In neural prosthetic applications, when the activities of neural populations are monitored for long periods of time (hours, days, or weeks), although the number of units remains almost constant, individual units might appear and disappear – . Such changes in the neural populations under study cause failure, or at least degradation, in the performance of the prosthesis. To handle and resolve such problems, the system in this work is designed to periodically recalculate the SFS and WD parameters (through the offline procedure already explained) and reconfigure the on-implant OSS accordingly. This maintains the classification performance in the presence of such signal variations. Taking into consideration physical and electrical limitations such as chip size and power consumption, a hardware prototype realizing the proposed spike sorting method is designed to be able to classify a total of 512 spike classes on all the 512 channels. 
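The periodic recalibration described above can be pictured as a simple control loop between the implant and the external module. The sketch below is purely schematic: every function name (record_spike_batch, cluster_and_label, fit_shadow_sorter, upload_oss_configuration) is a hypothetical placeholder rather than a real API, and the recalibration interval is an arbitrary illustration.

```python
import time

RECALIBRATION_PERIOD_S = 24 * 3600   # illustrative: once per day

def recalibration_loop(implant, external):
    """Schematic offline-retraining / reconfiguration cycle.

    `implant` and `external` stand in for the wireless interfaces of the
    implantable and external modules; none of these calls exist as such.
    """
    while True:
        spikes = implant.record_spike_batch()             # telemeter raw spikes
        labels = external.cluster_and_label(spikes)       # silhouette + k-means
        params = external.fit_shadow_sorter(spikes, labels)  # shadow spike sorter
        implant.upload_oss_configuration(params)          # reconfigure the OSS
        time.sleep(RECALIBRATION_PERIOD_S)                # wait for next cycle
```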
One of the major practical requirements for the proposed spike sorting method is the physical size of its hardware implementation. To be mounted on the backside of a 100-channel Utah electrode array (with an area of ~4 × 4 mm² , ), the silicon chip designed to realize the proposed method will therefore need to be smaller than ~16 mm². The physical layout of the chip implementation of the proposed 512-channel spike sorter in a 130-nm standard CMOS technology occupies a silicon area of 3.36 mm² (2.124 mm × 1.58 mm). Another physical concern in the development of a brain implant is the temperature increase it causes in the surrounding tissues. A temperature increase of more than 1–2 °C may damage the brain tissue – , and therefore introduces a strict limitation on the power dissipated by the active circuitry on a brain implant. According to ref. , the power density of a brain implant cannot exceed the upper limit of ~1.33 mW/mm² in order to keep the surrounding living tissues safe against temperature rise. Operated at a supply voltage of 1.2 V, the chip implementing the proposed spike sorting method dissipates a total power of 905.9 μW, which gives a safe power density of ~0.27 mW/mm². As reported in Fig. , the offline training time for the proposed 512-channel spike sorter (which is indeed the time required for the (re)configuration of the OSS) is ~5 min (0.64 s per channel). Even though this is somewhat larger than the configuration time reported in ref. , it is still significantly smaller than the much longer time (~30 min) that advanced brain machine interfaces typically require for (re)calibration (see refs. , ). Exponential class discrimination index As a measure for the normalized distance between the spike class under study (#i) and each one of the other spike classes (#j), it is proposed to use the exponential class discrimination index

$$d_{ij}=\exp\!\left(\frac{|\mu_i-\mu_j|}{\sqrt{P_i\sigma_i^{2}+P_j\sigma_j^{2}}}\right),\qquad(5)$$

in which (μ_i, μ_j), (σ_i, σ_j), and (P_i, P_j) are the mean values, standard deviations, and relative probabilities of occurrence of spike classes #i and #j, respectively. Figure illustrates this distance measure in the case of three spike classes in a two-dimensional feature space. SFS in a K-dimensional feature space In order to arrive at an optimum spike sorting solution, for each class a subset of L MSFs (out of the total of K features) is selected. This subset is referred to as the salient feature set for that class. To form the L-dimensional salient feature set for class #i, first, the saliency of this class is calculated using each and every feature (ς_i[k], 1 ≤ k ≤ K) as

$$\varsigma_i[k]=\frac{\left(\prod_{j=1,\,j\neq i}^{N_c}(d_{ij}[k])^{P_j}\right)^{2}}{\sum_{j=1,\,j\neq i}^{N_c}P_j\,d_{ij}[k]},\qquad(6)$$

in which d_ij[k] (1 ≤ k ≤ K) is the class discrimination index when classes are discriminated according to feature #k. Using the class saliency measure for each feature, the l-th member of the salient feature set for class #i is determined as

$$k_i^{l}=\arg\max_{\kappa\in\{1,2,\ldots,K\}}\left\{\varsigma_i[\kappa]\times\prod_{h=1}^{l-1}\bigl(1-\rho_i(\kappa,h)\bigr)\right\},\qquad(7)$$

where ρ_i(κ, h) indicates the correlation between the κ-th member of the main feature space and the h-th member of the salient feature set (i.e., k_i^h). Reporting summary Further information on research design is available in the Reporting Summary linked to this article.
Editorial to the Special Issue: “Synthesis of Organic Ligands and Their Metal Complexes in Medicinal Chemistry”
70ac1750-6194-400e-8705-266a90def275
9182312
Pharmacology[mh]
Pharmacogenomic Profiling of Cisplatin-Resistant and -Sensitive Human Osteosarcoma Cell Lines by Multimodal Targeted Next Generation Sequencing
cc0bc570-c108-4512-abd1-ab6393ade396
9570120
Pharmacology[mh]
High-grade osteosarcoma (HGOS), the most common malignant tumor of bone, is treated by surgery and systemic neo-adjuvant multidrug chemotherapy . Cisplatin (CDDP), together with high-dose methotrexate and doxorubicin, is invariably included in standard chemotherapy for this tumor . Pharmacogenetic studies have revealed several single nucleotide polymorphisms (SNPs) of genes belonging to DNA repair, drug transport, folate metabolism, or detoxification pathways to be associated with therapy-related parameters in HGOS, such as survival and drug response, or with the development of drug-associated toxicity . The general goal of these studies was the identification of genomic variations associated with drug response or adverse toxicities, which may provide useful information to improve treatment efficacy and simultaneously reduce the risk of chemotherapy-related toxicities . In the last decade, pharmacogenomic approaches have been increasingly applied to the study of HGOS, providing a series of interesting insights related to genetic polymorphisms that may be causally related to drug resistance or to the susceptibility to develop treatment-related adverse toxicities . However, all these indications must be further confirmed, because the polymorphic gene status was revealed almost exclusively in patients' normal (germline) cells at the DNA level, without providing information on how these changes were maintained at the RNA level and influenced RNA and protein expression in tumor cells. The aim of this study was to explore the genotype status of 28 SNPs in 14 genes related to processes involved in DNA repair, CDDP transport and detoxification, or involved in CDDP-related toxicity, in a panel of 6 CDDP-resistant and 12 drug-sensitive human HGOS cell lines . In particular, we focused our study on both pharmacogenetic (germline) and pharmacogenomic (tumor-associated, somatic) markers, which had been indicated to influence treatment response and susceptibility to CDDP-related ototoxicity in HGOS patients, thus appearing as promising candidates for translation to clinical practice. This selection was performed by taking into consideration the body of evidence reported so far, which has also been recently reviewed . This analysis was performed by using an innovative multimodal targeted next generation sequencing (mmNGS) approach that allowed for the contemporary study of the selected SNPs on both DNA- and RNA-derived libraries. Data obtained by mmNGS on DNA-derived libraries were validated by TaqMan genotyping. The RNA expression level of the 14 genes in the CDDP-resistant variants compared to their parental cell lines was also determined. Heatmap analysis was performed, including all CDDP-resistant and drug-sensitive cell lines. 2.1. Validation of Custom Multimodal NGS Panel Data obtained for the 28 SNPs on DNA-derived libraries by the custom mmNGS approach were validated by TaqMan genotyping for 24/28 SNPs. shows, for each cell line, the genotype status of all 28 SNPs that were identified to be either heterozygous or homozygous variant by sequencing compared to the reference sequence . Variants with an allele frequency greater than 3% were considered reliable. SNPs that were homozygous wild-type for the reference allele are not reported in . By comparing the data obtained from sequencing and genotyping, we found that for 11/18 (61%) cell lines the data obtained by both techniques matched for 100% of the SNPs, whereas in 4/18 (22%) cell lines the match ranged from 90 to 93%, and in 3/18 (17%) cell lines it was below 90%. 
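As an illustration of how the per-cell-line concordance between the two genotyping methods can be computed, a small sketch follows; the genotype dictionaries are hypothetical examples, not data from the study.

```python
def concordance(mmngs_calls, taqman_calls):
    """Percentage of SNPs with identical genotype calls in the two assays.

    Both arguments map SNP identifiers (e.g., 'rs13181') to genotype strings
    (e.g., 'GT'); only SNPs typed by both methods are compared.
    """
    shared = sorted(set(mmngs_calls) & set(taqman_calls))
    matches = sum(mmngs_calls[s] == taqman_calls[s] for s in shared)
    return 100.0 * matches / len(shared)

# hypothetical example for one cell line
mmngs = {"rs13181": "GT", "rs1799793": "CT", "rs11615": "GA", "rs1695": "AG"}
taqman = {"rs13181": "TT", "rs1799793": "CT", "rs11615": "GA", "rs1695": "AG"}
print(f"{concordance(mmngs, taqman):.0f}% of shared SNPs concordant")  # 75%
```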
The fact that 39% of the cell lines did not show a complete match could be explained by the presence of different subpopulations within the same cell line. Interestingly, for five SNPs (ABCC2 rs17222723, ACYP2 rs1872328, and TPMT rs12201199, rs1142345, and rs1800460), the homozygous wild-type genotype was identified in all 18 cell lines. 2.2 DNA SNP Evaluation in Relation to Level of CDDP Resistance 2.2.1 Comparison between U-2OS CDDP-Resistant Variants and the Parental U-2OS Cell Line The comparison of the polymorphisms identified in the group of U-2OS CDDP-resistant variants with those of their parental cells identified two polymorphisms of the ERCC2 gene (rs13181 and rs1799793) that exhibited a genotype change in relation to the acquisition of CDDP resistance . In the CDDP-sensitive parental cell line and in the two variants with the lower levels of CDDP resistance (U-2OS/CDDP300 and U-2OS/CDDP1µg), the genotype of ERCC2 rs13181 was heterozygous variant (GT), while in the variant with the highest resistance level (U-2OS/CDDP4µg) the genotype of the polymorphism shifted to homozygous wild-type (TT). shows the graphical representation of the mmNGS data obtained by the DNA variant calling identifier tool of the CLC Genomics Workbench (GWB) analysis. The data obtained for these SNPs by TaqMan genotyping and mmNGS were concordant for all cell lines except for ERCC2 rs13181 in the U-2OS/CDDP1µg variant, for which mmNGS reported a GT genotype but TaqMan genotyping a TT genotype. This apparent discordance may be due to the different sensitivity of the techniques and the presence of subpopulations with TT and GT genotypes. However, these data indicate that in these cells the transition toward a TT genotype is associated with the development of CDDP resistance. For ERCC2 rs1799793, the sensitive cell line and the two resistant cell lines U-2OS/CDDP300 and U-2OS/CDDP1µg showed a heterozygous variant genotype CT, which became homozygous (CC) in the variant with the highest resistance level ( and ). As shown in and , both SNPs of ERCC2 were non-synonymous and caused amino acid changes. The SNP rs13181 caused the substitution of Lys with Gln and rs1799793 the substitution of Asp with Asn. 2.2.2 Comparison between Saos-2 CDDP-Resistant Variants and the Parental Saos-2 Cell Line The comparison of DNA variant calling data between Saos-2 CDDP-resistant variants and their parental Saos-2 CDDP-sensitive cell line identified genotype changes of ERCC2 rs13181 and ERCC1 rs11615 . These genotype changes were confirmed by both TaqMan genotyping and mmNGS. For ERCC2 rs13181, the genotype of the detected polymorphism was heterozygous variant GT in the sensitive cell line and the two Saos-2 resistant variants with lower resistance levels, while in the Saos-2/CDDP6µg variant the genotype changed to homozygous variant GG . The same situation occurred for ERCC1 rs11615, which was heterozygous variant GA in the sensitive cell line and the two resistant variants with lower resistance levels, but homozygous variant GG in Saos-2/CDDP6µg . Unlike the ERCC2 rs13181 variant, which caused an amino acid change from Lys to Gln, no amino acid changes were revealed by the CLC GWB analysis for the synonymous ERCC1 rs11615 variant . 2.3. RNA SNP Evaluation in Relation to Level of CDDP Resistance 2.3.1.
Comparison between U-2OS Cell Line and U-2OS CDDP-Resistant Variants All genotype variations identified at the DNA level and described above were also identified at the RNA level, indicating that these changes had been selected and maintained during the development of CDDP resistance. In contrast, the GSTP1 rs1695 SNP differed in the RNA-derived libraries of the U-2OS cell line and the U-2OS/CDDP1µg variant compared to the DNA-derived libraries . The genotype of GSTP1 rs1695 detected on DNA remained AG in the sensitive and in the three resistant cell lines. At the RNA level, the genotype of GSTP1 rs1695 was homozygous wild-type AA in U-2OS and heterozygous variant AG in the U-2OS/CDDP300 and U-2OS/CDDP4µg variants, while in the U-2OS/CDDP1µg variant a multi-nucleotide variant (MNV), GAT, was detected. Interestingly, the amino acid change Ile105Val caused by the GSTP1 rs1695 variant allele was identified by the CLC GWB in all three CDDP-resistant U-2OS variants . 2.3.2. Comparison between Saos-2 Cell Line and Saos-2 CDDP-Resistant Variants All genotype changes identified at the DNA level were also identified at the RNA level except for the GSTP1 rs1695 SNP. As with the U-2OS cell lines, the rs1695 genotype was AG at the DNA level but homozygous AA at the RNA level in the Saos-2 parental cell line . No difference between the DNA and RNA levels was found for any of the CDDP-resistant variants . Accordingly, the amino acid change Ile105Val caused by the GSTP1 rs1695 variant allele was identified by the CLC GWB in all three CDDP-resistant Saos-2 variants with the AG genotype in the RNA-derived libraries . 2.4. RNA Expression Analysis Targeted RNAseq was performed for the 14 genes related to either CDDP drug response or toxicity reported after CDDP therapy. The fold-change of transcripts per million (TPM), which estimates the fold-change in RNA expression, for each CDDP-resistant variant compared to its drug-sensitive parental cell line is graphically shown in . In U-2OS-derived CDDP-resistant variants, six genes, ABCC2, ABCC3, ACYP2, COMT, ERCC2, and XRCC3, were increased more than 2-fold compared to the parental U-2OS cell line, whereas four genes, ATM, ATR, TP53, and XPA, were downregulated in CDDP-resistant variants. Considering all three CDDP-resistant variants together, the differential gene expression tool of the CLC GWB identified the downregulation of ATM, ATR, and TP53 as significant with a Bonferroni-corrected p-value < 0.05. In Saos-2-derived CDDP-resistant variants, six genes were increased more than 2-fold: ABCB1, ABCC2, and XRCC3 in all three variants, and ACYP2, COMT, and ERCC2 only in Saos-2/CDDP300, the variant with the lowest resistance level. CDDP-resistant variants also presented downregulation of the ATM, ATR, GSTP1, TPMT, and XPA genes. Evaluating all three CDDP-resistant variants together, a significant difference after Bonferroni correction with a p-value < 0.05 was identified for the upregulation of ABCB1 and the downregulation of TPMT and XPA . Similarities between CDDP-resistant and CDDP-sensitive cell lines were assessed by using the heatmap tool of the CLC GWB, including all 14 genes . Two main clusters were revealed. One consisted of two sub-clusters formed by all six CDDP-resistant variants, clearly separated from their two parental cell lines. The 10 drug-sensitive cell lines formed the second main cluster, which was mostly separated from that of the CDDP-resistant variants. As also shown in , the 14 genes were divided into 6 clusters, with genes belonging to the same family mostly grouped together.
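The expression analyses above were run with the CLC GWB tools; as a rough illustration of the underlying calculations only, the R sketch below computes TPM fold-changes of resistant variants relative to their parental line and clusters cell lines with Euclidean distance and complete linkage. The TPM matrix and all values are invented for illustration, and the pseudocount is an assumption, not part of the reported workflow.

```r
# Minimal sketch (not the CLC GWB workflow): TPM fold-change and hierarchical
# clustering with Euclidean distance and complete linkage, on hypothetical data.
# 'tpm' is an invented matrix: rows = genes, columns = cell lines.
set.seed(1)
genes <- c("ABCB1", "ABCC2", "ABCC3", "ACYP2", "ATM", "ATR", "COMT",
           "ERCC1", "ERCC2", "GSTP1", "TP53", "TPMT", "XPA", "XRCC3")
lines <- c("U-2OS", "U-2OS/CDDP300", "U-2OS/CDDP1ug", "U-2OS/CDDP4ug")
tpm <- matrix(rexp(length(genes) * length(lines), rate = 1 / 50),
              nrow = length(genes), dimnames = list(genes, lines))

# Fold-change of each resistant variant relative to its parental line,
# with a small pseudocount to avoid division by zero.
fold_change <- (tpm[, -1] + 1) / (tpm[, "U-2OS"] + 1)
round(fold_change, 2)

# Hierarchical clustering of cell lines on log-transformed TPM values,
# mirroring the Euclidean distance / complete linkage settings reported.
d <- dist(t(log2(tpm + 1)), method = "euclidean")
hc <- hclust(d, method = "complete")
plot(hc, main = "Hypothetical clustering of cell lines")
heatmap(log2(tpm + 1),
        distfun = function(x) dist(x, method = "euclidean"),
        hclustfun = function(x) hclust(x, method = "complete"))
```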
In this study, a custom mmNGS approach has been used to study 28 SNPs of 14 genes, simultaneously at the DNA and RNA level, in 6 CDDP-resistant and 12 CDDP-sensitive human HGOS cell lines. To our knowledge, this innovative approach has not been used so far for pharmacogenomic studies. The successful validation of the DNA variant calling by TaqMan genotyping confirmed that this approach is an appropriate method to study even rare SNPs. Compared to genotyping by single TaqMan assays, the custom mmNGS approach is faster and also offers the possibility of identifying additional SNPs mapping to the target region. Moreover, small targeted panels allow the pooling of higher sample numbers compared to whole-genome NGS. Another advantage of the mmNGS approach is the low amount of starting material required for library preparation, which facilitates the application of this method to tumor tissue samples. In addition, the simultaneous analysis of SNPs at the DNA and RNA level, as well as the possibility of estimating the level of RNA expression associated with the polymorphic gene status, allows the genotype status to be directly correlated with the biological function of each SNP. The number of variant alleles per cell line ranged from 9 (found in IOR/OS15 and MG-63) to 22 (detected in IOR/SARG), confirming the heterogeneity and high genetic instability of HGOS. Particular genotype distributions were found for 9 SNPs. Five SNPs that had been reported in association with CDDP-related ototoxicity, ACYP2 rs1872328, ABCC2 rs17222723, and TPMT rs12201199, rs1142345, and rs1800460, were present only in the wild-type status in all cell lines. Two SNPs, ABCC2 rs717620 and GSTP1 rs1695, were found as homozygous wild-type or heterozygous but not as homozygous variant. These findings suggest that the variant allele of these seven SNPs could be a biological disadvantage in HGOS tumor cells. In contrast, the two SNPs of TP53, rs1042522 and rs1642785, were identified in either a homozygous wild-type or a homozygous variant status, but not in a heterozygous status. The most relevant SNPs that emerged from this study as associated with the development of CDDP resistance were GSTP1 rs1695, ERCC2 rs13181, ERCC2 rs1799793, and ERCC1 rs11615. GSTP1 rs1695 was the only SNP for which the genotype changes found in the RNA-derived libraries differed from those revealed in the DNA-derived libraries.
Interestingly, in all five drug-sensitive cell lines with the heterozygous genotype in the DNA (U-2OS, Saos-2, IOR/OS10, IOR/OS14, IOR/OS18), the genotype status in the RNA was homozygous wild-type. The presence of the variant also in the RNA of all six CDDP-resistant cell lines, with the consequent amino acid change Ile105Val, strongly suggests that the AG genotype is associated with reduced CDDP response. These findings further support the previously demonstrated relevance of enhanced GSTP1 enzymatic activity in these CDDP-resistant HGOS cell lines . The pharmacogenomic findings that emerged from the present study thus indicate that the increase in GSTP1 activity observed in CDDP-resistant variants is correlated with the transition to the AG genotype of the rs1695 polymorphism and the consequent Ile105Val amino acid change. This observation is concordant with the data reported in almost all germline studies. A significant association between AG+GG genotypes and poor histological response, as well as decreased event-free and overall survival, was observed in five studies, whereas one study reported the GG genotype to be associated with good response . Interestingly, GSTP1 rs1695 was excluded from further analyses in the study by Goricar and co-workers because the genotype frequencies of rs1695 were not in Hardy-Weinberg equilibrium . Since their study was performed on paraffin-embedded HGOS tumor tissue samples and not on DNA extracted from lymphocytes, as in almost all pharmacogenetic analyses, their observation is concordant with our data obtained on HGOS cell lines. Genotype changes in relation to CDDP resistance were found for the two non-synonymous SNPs ERCC2 rs1799793 and rs13181 and the synonymous ERCC1 rs11615 at both the DNA and RNA level. All three SNPs have been reported to be associated with survival and toxicity, but the data are quite discordant . However, the ERCC2 rs1799793 GG genotype was reported in association with poor event-free survival compared to the GA+AA genotypes, and ERCC2 rs13181 AA with poor response to chemotherapy compared to AC+CC . Our data obtained in CDDP-resistant cell lines confirm the relevance of these two SNPs and suggest that they could serve as biomarkers. In one germline study performed on 130 patients with osteosarcoma treated with neoadjuvant cisplatin-based therapy in combination with doxorubicin, methotrexate, and ifosfamide, the ERCC2 rs13181 and ERCC2 rs1799793 SNPs were associated with survival . The authors suggested that the amino acid change that occurred as a result of the mutation reduced the ability of the enzyme ERCC2 to repair DNA, thus resulting in greater efficacy of cisplatin. In our study, this finding seems to be confirmed by the fact that the most resistant cell line returned to the wild-type genotype, restoring the repair capacity of the enzyme, with a consequent increase in resistance to the chemotherapeutic agent. The functional consequences of ERCC2 rs1799793 and rs13181 on protein structure and stability have recently been elucidated . Molecular dynamics simulation of the native ERCC2 protein and the variant protein with the substitution of Asp by Asn revealed that rs1799793 resulted in a destabilized, less active protein compared to the native one. In addition, the ERCC2 rs13181 variant caused the loss of a C-terminal alpha-helix and beta-sheet . Although these secondary structures were lost, the overall folding was not disrupted, suggesting that this polymorphic variation has a less relevant impact on protein function.
For ERCC1 rs11615, which changed to the homozygous wild-type genotype status in the Saos-2 CDDP-resistant variant with the highest level of resistance, two germline studies reported similar evidence, with better survival associated with the TT genotype compared to the CC genotype . However, five studies reported the opposite evidence for overall survival . Differential gene expression analysis identified dysregulations in CDDP-resistant variants compared to their parental cell lines, suggesting that the development of CDDP resistance influences not only genes of the NER pathway, which is known to be mainly responsible for the removal of CDDP-associated DNA adducts, but also genes belonging to other DNA repair mechanisms, such as ATM and ATR . It has been shown that cancer cells that are deficient in one DNA repair pathway can activate other functional repair pathways, which underlines the importance of studying more than one of these pathways for treatment optimization . The biological consequence of the significant downregulation of ATM and ATR observed in Saos-2 CDDP-resistant variants is a relevant finding that needs to be further explored, since inhibitors against ATR have already been used in clinical trials in other cancers . On the other hand, the upregulation of ACYP2 and COMT, although not significant, also warrants attention because SNPs of these two genes had been described to be associated with ototoxicity after CDDP treatment . In conclusion, the mmNGS approach proved to be an innovative, reliable tool to detect genetic polymorphisms at both the DNA and RNA level, allowing for the identification of genetic changes causally related to CDDP resistance in HGOS cells. Once further validated in tumor sample series, these SNPs could be useful for identifying patients with reduced sensitivity to CDDP-based therapy and/or increased susceptibility to CDDP-related adverse toxicities. 4.1. Cell Lines The study was performed on a panel of 12 drug-sensitive human HGOS cell lines: U-2OS, Saos-2, MG-63, and HOS (purchased from the American Type Culture Collection (ATCC), Rockville, MD, USA) and IOR/OS9, IOR/OS10, IOR/OS14, IOR/OS15, IOR/OS18, IOR/OS20, IOR/MOS, and IOR/SARG, which were established from tumor specimens at the Laboratory of Experimental Oncology of the Orthopaedic Rizzoli Institute . The panel of 6 CDDP-resistant variants was derived from either the U-2OS (U-2OS/CDDP300, U-2OS/CDDP1μg, U-2OS/CDDP4μg) or the Saos-2 (Saos-2/CDDP300, Saos-2/CDDP1μg, Saos-2/CDDP6μg) CDDP-sensitive cell line, as previously reported . Resistant variants were established by exposing parental cells to step-by-step increases in CDDP concentration. The continuous in vitro drug exposure resulted in the establishment of variants resistant to 300 ng/mL CDDP (U-2OS/CDDP300 and Saos-2/CDDP300), 1 µg/mL (U-2OS/CDDP1µg and Saos-2/CDDP1µg), 4 µg/mL (U-2OS/CDDP4µg), or 6 µg/mL CDDP (Saos-2/CDDP6µg). Establishment of adequate in vitro growth at each new CDDP concentration required approximately 10–12 weeks (corresponding to 8–10 in vitro passages), and variants were considered definitively stabilized when they reached the 20th in vitro passage. CDDP sensitivity of each cell line was expressed as the IC50 (the drug concentration resulting in 50% inhibition of cell growth after 96 h of in vitro treatment).
The fold-increase in CDDP resistance of each variant was determined by comparing its IC50 value with that of its corresponding parental cell line and, as previously described, ranged from 4.0- to 62.5-fold for U-2OS variants and from 7.4- to 112.1-fold for Saos-2 variants . All cell lines were cultured in Iscove's modified Dulbecco's medium (IMDM) supplemented with 10% fetal bovine serum (Biowhittaker Europe, Cambrex-Verviers, Belgium) and maintained in a humidified atmosphere with 5% CO2 at 37 °C. Drug-resistant variants were continuously cultured in the presence of the CDDP concentrations used for their selection. Cell pellets were prepared according to standard procedures when cells were confluent, snap-frozen, and stored at −80 °C. DNA fingerprint analyses were performed for all cell lines using 17 polymorphic short tandem repeat sequences, confirming their identity. 4.2. Extraction of Nucleic Acids DNA and RNA were simultaneously isolated and purified from the same pellet obtained from each cell line by using the AllPrep DNA/RNA mini kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. During this process, the DNA and RNA were isolated from the entire sample by passing the lysate first through the AllPrep DNA spin column to isolate high-molecular-weight total genomic DNA and then through the AllPrep RNA spin column to isolate total RNA. A DNA and RNA quality check was performed for all samples by spectrophotometry (NP-80, Implen, Munich, Germany). All RNA samples were run on a 2100 Bioanalyzer system (Agilent, Santa Clara, CA, USA) using the RNA 6000 kit (Agilent, Santa Clara, CA, USA). 4.3. Custom Multi-Modal Targeted Next Generation Sequencing (mmNGS) Library preparation was performed according to the QIAseq Multimodal Panel handbook v06/2020 (Qiagen, Hilden, Germany) for small panels. The primers for the libraries derived from DNA were designed for 28 SNPs of 14 genes related to DNA repair, CDDP transport and detoxification, and TP53 . For the libraries prepared from RNA, primers were designed for the SNPs mapping to exons of the 14 genes, thus allowing RNA variant calling. The specific design of the RNA panel also enabled RNA expression analysis of these 14 genes (technical service, Qiagen, Hilden, Germany). This approach uses integrated unique molecular indices (UMIs), which improve the specificity of variant detection. DNA- and RNA-derived libraries were prepared for all 18 cell lines. Prior to library preparation, the nucleic acid concentrations were determined fluorometrically by Qubit high-sensitivity assays on a Qubit reader version 4.0 (Thermo Fisher Scientific by Life Technologies Italia, Monza, Italy). For library preparation, the input amount was 40 ng of DNA and 100 ng of RNA. All libraries were run on a 2100 Bioanalyzer system (Agilent, Santa Clara, CA, USA) using the High Sensitivity DNA kit (Agilent, Santa Clara, CA, USA) to check the profile of the samples. The fragment lengths of all libraries ranged between 400 and 600 base pairs, as expected according to the protocol. In order to provide an accurate quantification of the amplifiable libraries, the QIAseq Library Quant Assay kit (Qiagen, Hilden, Germany) was used for all of them on a real-time PCR system (7900HT Fast Real-time PCR system; Thermo Fisher Scientific by Life Technologies Italia, Monza, Italy).
For sequencing, libraries were diluted to 1.2 pM, pooled together, and analyzed by paired-end sequencing on a NextSeq 500 instrument (Illumina Inc., San Diego, CA, USA) using a mid-output reagent kit v2.5 (300 cycles) with a custom sequencing primer provided with the library preparation kit. 4.4. mmNGS Data Analysis by CLC Genomics Workbench All bioinformatic analyses were performed using the CLC GWB software (Qiagen Bioinformatics, Aarhus, Denmark) v22.04. FastQ files were downloaded from the BaseSpace cloud (Illumina Inc., San Diego, CA, USA) and imported into the CLC GWB (Qiagen Bioinformatics, Aarhus, Denmark). For the detection of DNA variants and gene expression, the FastQ files were analyzed using the Biomedical Genomics Analysis plugin running the QIAseq Multimodal Analysis workflow. The DNA and RNA reads were aligned to the human genome hg38 reference sequence and filtered using a coverage of 100× and a variant allele frequency (VAF) higher than 3%. For RNA variant calling, a custom workflow was provided by Qiagen bioinformatics support. This workflow used UMIs, mapped the reads to the human genome hg38, and filtered them with specific parameters for rare RNA variant calling. For differential gene expression analysis between the groups of drug-resistant variants and their respective parental cell line, the differential expression tools of the CLC GWB were used, and changes with a Bonferroni-corrected p-value < 0.05 were considered significant. For hierarchical clustering analysis, the heatmap tool of the CLC GWB was used with Euclidean distance and complete linkage. 4.5. SNP Genotyping by Real-Time PCR In total, 24 of the 28 selected polymorphisms were validated by real-time genotyping PCR . TaqMan SNP genotyping assays (Thermo Fisher Scientific by Life Technologies Italia, Monza, Italy) or drug-metabolizing enzyme (DME) assays, which had been functionally tested, were used to validate the performance of the mmNGS approach. The genotyping experiments were performed according to standard protocols with 10 ng of DNA as input material on the VIIA 7 DX real-time PCR system (Thermo Fisher Scientific by Life Technologies Italia, Monza, Italy), and the results were analyzed with the TaqMan Genotyper software (Thermo Fisher Scientific by Life Technologies Italia, Monza, Italy), which generated allelic discrimination cluster plots to determine the genotype of each SNP.
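The validation of the mmNGS DNA calls against TaqMan genotyping amounts to a per-cell-line concordance calculation. The R sketch below illustrates that calculation only; the genotype tables are invented for illustration, and the real calls came from the CLC GWB variant caller and the TaqMan Genotyper software rather than from any script reported by the authors.

```r
# Minimal sketch (illustration only): per-cell-line concordance between
# genotypes called from the DNA-derived mmNGS libraries and TaqMan genotyping.
mmngs  <- data.frame(cell_line = c("U-2OS", "U-2OS", "Saos-2", "Saos-2"),
                     snp       = c("rs13181", "rs1799793", "rs13181", "rs11615"),
                     genotype  = c("GT", "CT", "GT", "GA"))
taqman <- data.frame(cell_line = c("U-2OS", "U-2OS", "Saos-2", "Saos-2"),
                     snp       = c("rs13181", "rs1799793", "rs13181", "rs11615"),
                     genotype  = c("GT", "CT", "GT", "GA"))

merged <- merge(mmngs, taqman, by = c("cell_line", "snp"),
                suffixes = c("_mmngs", "_taqman"))
merged$match <- merged$genotype_mmngs == merged$genotype_taqman

# Percentage of concordant SNP calls per cell line, the quantity summarised in
# the Results (100% for most lines, lower where subpopulations may coexist).
concordance <- aggregate(match ~ cell_line, data = merged,
                         FUN = function(x) 100 * mean(x))
concordance
```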
Jack of all trades, master of one: domain-specific and domain-general contributions to perceptual expertise in visual comparison
3258f173-a602-4a16-9bbd-1344675b1c2c
11519270
Forensic Medicine[mh]
The ability to spot differences or similarities between patterns—like comparing fingerprints or recognising faces—varies widely. Forensic science examiners in select disciplines excel at such tasks within their domain of expertise (e.g., a fingerprint examiner comparing fingerprints). However, how they fare outside their domain of expertise is less well understood (e.g., a fingerprint examiner comparing faces). In this study, we recruited face, fingerprint, and firearms examiners to explore whether their skill generalises beyond their domain of expertise. We found a hierarchy of expert performance: examiners outperformed other examiners and novices within their domain of expertise but also outperformed novices outside their domain. Examiners' skill does generalise, but only to a certain extent. As accuracy is maximised in an examiner's domain of expertise, our results do not suggest that professional performance would be improved by examiners practising outside their trained discipline, but rather that examiners possess or acquire an ability that partially generalises across domains through training or experience. This implies that there may be common mechanisms underpinning the generalisation of visual comparison skills across domains. Future research could uncover these mechanisms for use in training and develop evidence-based programmes to fast-track the performance of new trainees. Our results also suggest that there is individual variation in skill amongst both professionals and novices. Forensic science organisations could also improve professional performance by recruiting people with a natural aptitude for visual comparison from the general population. Expertise is typically characterised as narrow and domain-specific (Bedard & Chi, ; Ericsson et al., , ). It is thought that expert skill is developed via experience and deliberate practice within an expert's primary domain of expertise (Ericsson, ; Ericsson et al., ; Keith & Ericsson, ). Popular culture estimates suggest that it takes 10,000 h to become an expert (Gladwell, ). It has thus long been thought that 'experts excel mainly in their own domain' (Chi et al., , p. xvii). Many studies confirm that domain-specific expertise rarely generalises beyond an expert's domain of experience. For example, orthodontists can judge face symmetry better than novices, but not the symmetry of non-face stimuli (Jackson et al., ); super-memorizers, those with superior associative memory, do not have superior face recognition memory (Ramon et al., ); and modern car experts can discriminate modern cars better than novices, but not antique cars (Bukach et al., ). Yet, many other factors are at play in how expertise develops, such as individual differences in talent, cognitive abilities, genetics, and personality (Hambrick et al., ). The acquisition of expertise interacts with individual differences and domain-general abilities in many different disciplines, where some people acquire expertise faster in a given domain than others (Kaufman, ). This is seen in domains ranging from bird or mineral expertise (Martens et al., ) to chess expertise (Smith et al., ). If the interaction between individual differences and the acquisition of expertise is also generalisable across domains, it could be used to predict subsequent expertise. One area with important real-world consequences where domain-general individual differences and domain-specific expertise interact is forensic visual comparison (Growns et al., ; Phillips et al., ).
Visual comparison is a complex task in which visual stimuli are compared to determine whether they are from the same or different sources. Forensic examiners like face, fingerprint, and firearms examiners complete visual comparison tasks professionally to link or exclude evidence from crime scenes (National Academy of Sciences, ; President's Council of Advisors on Science and Technology, ). For example, face examiners compare images of faces to identify suspects of crime in CCTV footage or to prevent passport fraud (White et al., ). Similarly, fingerprint examiners compare fingerprints found at crime scenes to judge whether they are from a specific suspect or a different person (Busey & Vanderkolk, ), and firearms examiners compare cartridge cases fired from guns to link a specific bullet to a specific gun (Mattijssen et al., ). Face, fingerprint, and firearms examiners possess domain-specific perceptual expertise as they outperform the norm in visual comparison within their area of experience (Busey & Vanderkolk, ; Gutierrez & Prokesch, ; Mattijssen et al., ; Tangen et al., ; White et al., ). Yet, there is also emerging evidence of domain-general ability amongst some forensic examiners. Fingerprint examiners outperform novices in face comparison, a task outside their domain of expertise (Phillips et al., ). Similarly, face examiners outperform novices in fingerprint comparison, also a task beyond their expertise (Towler et al., ). This suggests that the perceptual expertise of forensic examiners may lend generalisable skill and enable above-average performance in domains outside of their expertise. It is possible that individual differences in visual comparison interact with the acquisition of expertise in ways that are not yet understood. Individual differences in visual comparison are seen even amongst professional forensic examiners. For example, face examiners' area under the curve (AUC) scores on a proficiency test designed to reflect professional casework ranged from 0.72 to 0.99 (Towler et al., ; see also Sexton et al., ). Similarly, fingerprint trainees' performance on fingerprint comparison varies from 77% to 87% accuracy even after 12 months of training (Searston & Tangen, ). Face examiners' face comparison ability is also not correlated with their length of employment (White et al., ), and variation in performance is also seen amongst firearms examiners (Gutierrez & Prokesch, ; Mattijssen et al., ). Further, individual differences in skill amongst forensic trainees before training are reliable predictors of future professional performance in fingerprint comparison (Searston & Tangen, ). Even in the general population, there is variation in visual comparison ability amongst those without forensic science training or experience (Growns et al., ). Novices' visual comparison skill generalises across different complex visual stimuli: top-performing novices who excel at comparing one type of stimulus (e.g., fingerprints) also excel with other types of stimuli (e.g. faces or firearms; Growns et al., ). Pre-existing individual differences in visual comparison ability in the general population may interact with the development of expertise in forensic science in ways that remain unclear. Pre-existing variation in skill may be one reason that forensic examiners' ability generalises beyond their domain of expertise. Yet, this is difficult to explore as the development of expertise can also reduce existing variance in ability.
We thus aim to explore how the perceptual expertise of forensic examiners generalises across domains at both a group and individual level, across a range of domain-specific, domain-general, and entirely novel tasks. In this paper, we investigate the relationship between domain-general cognitive abilities and expertise by exploring the domain-specific and domain-general contributions to forensic examiners' perceptual expertise. Forensic expertise provides an opportunity to investigate the relationship between expertise and domain-general mechanisms as experts can make essentially the same judgement (i.e., same or different source) about different stimuli (i.e., those within and outside their domain of expertise). At a group level, we investigated whether forensic examiners from three different disciplines (faces, fingerprints, and firearms) outperform each other in expert-domain (i.e., domain-specific), non-expert-domain (i.e., domain-general), and entirely novel visual comparison tasks. At an individual level, we examined whether individual differences in examiners' expert-domain visual comparison performance were predicted by non-expert-domain ability, and by other personality (intrinsic motivation) and cognitive (statistical learning) measures. We recruited face, fingerprint, and firearms examiners to complete four visual comparison tasks (face, fingerprints, firearms, and novel-objects) and two discriminant validity tasks (intrinsic motivation and statistical learning), with novices as a control comparison sample. The pre-registration, data, and analysis scripts can be found at https://osf.io/2ahsq/ . Participants Participants were 85 forensic examiners (13 face, 42 fingerprint, and 30 firearms) recruited via a snowball-sampling method with emails sent to forensic organisations and mailing lists. The sample size was determined by the number of forensic examiners recruited during our pre-registered data acquisition period. Forensic examiners first indicated whether or not they would describe themselves as a forensic scientist or practitioner, and then nominated the discipline that was their primary area of specialisation (i.e., face, fingerprint, firearms, or other discipline). They then provided information about their experience and employment within their primary area of specialisation (see Table ). We then recruited 93 novices from Prolific Academic as a sample-size-matched comparison, including an additional 10% ( n = 8) to account for attrition. Participants from Prolific were required to have normal or corrected-to-normal vision and an approval rate on Prolific of 95% or above. We elected to use a novice comparison sample for ease of recruitment, but it is important to note that previous research has shown that comparable professional samples without domain-specific training do not perform at the same level as experts. For example, lawyers do not outperform firearms examiners (Gutierrez & Prokesch, ), and facial reviewers who compare passport photos to detect passport fraud do not outperform face examiners who receive extensive training and mentorship in face comparison (White et al., , ). Novices were paid £6.50 for participation in the approximately 60-min study; examiners were not paid for their involvement. To motivate performance, all participants had the chance to win one of ten £500 Amazon vouchers that were awarded to the top two performers in each task, including statistical learning (except the intrinsic motivation inventory).
Novices were not informed that examiners were also participating in this experiment to ensure their incentive to participate was not impaired. It is thus likely that compensation was comparably motivating for novices and examiners. No participants were excluded based on our exclusion criteria of not passing at least three of four attention-check questions. Demographic and professional practice information for each group can be seen in Table . Visual comparison tasks Face comparison task Participants completed 40 face comparison trials (20 match and 20 non-match) from the Glasgow Face-Matching Task 2—High-Version (GFMT2-High; White et al., ). Participants viewed two faces side-by-side and were asked 'Are these images of the same person or two different people?' on each trial. They responded by selecting one of two buttons ('same' or 'different') at the bottom of the screen (see Fig. ). To best capture the skill of face comparison experts, we used the GFMT2-High because it contains trials that are designed to discriminate between top-performers (see White et al., ).
That is, trials in the GFMT2-High were selected based on the highest item-to-test correlations for individuals with above-median performance—or how well accuracy on each trial predicts a participant’s overall performance (Guilford, ; Wilmer et al., ). Fingerprint comparison task Participants completed 40 fingerprint comparison trials (20 match and 20 non-match) from the fingerprint comparison task in Growns et al. . Participants viewed two fingerprints side-by-side and were asked ‘Are these fingerprints from the same person or two different people?’ on each trial (see Fig. ). They responded by selecting one of two buttons (‘same’ or ‘different’) at the bottom of the screen. To best measure the performance of fingerprint comparison experts, we selected trials using the same method as the GFMT2-High: 40 trials were chosen with the highest item-to-test correlations for individuals with above-median performance in previous research (i.e., Experiment 2 in Growns et al., ). Firearms comparison task Participants completed 40 firearms comparison trials (20 match and 20 non-match) from the firearms comparison task in Growns et al. . Participants viewed two cartridge cases side-by-side and were asked ‘Are these cartridge cases from the same firearm or two different firearms?’ on each trial (see Fig. ). They responded by selecting one of two buttons (‘same’ or ‘different’) at the bottom of the screen. To best measure the performance of firearms comparison experts, we selected trials using the same method as above of selecting 40 trials with the highest item-to-test correlations for above-median performance (i.e., in Experiment 2 in Growns et al., ). Novel-object comparison Participants completed 40 novel-object comparison trials (20 match and 20 non-match) from the Novel-Object-Matching Test (Growns et al., ). Participants viewed two novel-objects side-by-side and were asked ‘Are these prints from the same stamping tool or two different stamping tools?’ on each trial (see Fig. ). They responded by selecting one of two buttons (‘same’ or ‘different’) at the bottom of the screen. To best capture the performance of all experts in a non-expert-domain task, we selected trials using the same method as above of selecting 40 trials with the highest item-to-test correlations for above-median performance (i.e., in Experiment 2 in Growns et al., , , ). Discriminant validity tasks Intrinsic motivation inventory Participants completed a measure of their intrinsic motivation and subjective experience during the experiment: the Intrinsic Motivation Inventory (McAuley et al., ). The Intrinsic Motivation Inventory is a validated measure of intrinsic motivation because it has acceptable reliability and stability (McAuley et al., ; Tsigilis & Theodosiou, ) and has been used across multiple domains—from education to mental health research (Choi et al., ; Leng et al., ; Monteiro et al., ). Participants completed three sub-scales of the inventory: the effort, enjoyment , and perceived competence sub-scales. They answered questions on a 7-point Likert scale from ‘Not at All True’ to ‘Very True’. They answered questions such as: ‘I put a lot of effort into this’ (effort sub-scale); ‘I enjoyed doing this activity very much’ (enjoyment sub-scale); and ‘I am satisfied with my performance in this task’ (perceived competence sub-scale). A full list of the questions can be found at https://selfdeterminationtheory.org/intrinsic-motivation-inventory/ . 
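The trial-selection approach described above (choosing items with the highest item-to-test correlations among above-median performers) can be illustrated with a short R sketch. The response matrix below is invented, and the corrected item-total correlation is one plausible way to operationalise "item-to-test correlation"; the original test constructions may have computed it differently.

```r
# Minimal sketch (assumption-laden, not the original selection code): choosing
# comparison trials by item-to-test correlation among above-median performers.
# 'responses' is an invented accuracy matrix: rows = participants, columns =
# trials (1 = correct, 0 = incorrect).
set.seed(2)
n_participants <- 200
n_trials <- 120
responses <- matrix(rbinom(n_participants * n_trials, 1, 0.75),
                    nrow = n_participants)

total_score <- rowSums(responses)
above_median <- responses[total_score > median(total_score), , drop = FALSE]

# Item-to-test correlation: how well accuracy on each trial predicts overall
# performance, computed here as the correlation between each trial and the
# total score with that trial removed (corrected item-total correlation).
item_total <- sapply(seq_len(ncol(above_median)), function(i) {
  rest <- rowSums(above_median[, -i, drop = FALSE])
  cor(above_median[, i], rest)
})

# Keep the 40 trials that best discriminate among top performers.
selected_trials <- order(item_total, decreasing = TRUE)[1:40]
selected_trials
```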
Statistical learning task Participants completed a visual statistical learning task adapted from previous research (Growns & Martire, ; Growns et al., ) where participants first completed an exposure phase and then a test phase. During the exposure phase, participants viewed 60 complex patterns (see Fig. ) in a randomised order (each pattern displayed for 3-s with a 200-ms interval in-between) and were instructed to pay attention to them as they would be asked some questions about them afterwards. Each pattern contained different features (see Fig. ) on the ends of the pattern ‘arms’ that occurred with different statistical frequencies across all patterns (e.g. feature ‘A’ appeared in 10% of patterns, while feature ‘B’ appeared in 20% of patterns). During the test phase, participants completed 45 trials where they were tested on how well they learned the frequencies, by being asked which of 2, 3, or 4 features were more familiar to them. Procedure All participants completed the experiment via Qualtrics . They first consented to participate in the study and then provided brief demographic and professional practice information (examiners only), received instructions, and then completed the five visual comparison tasks and statistical learning tasks in a random order, followed by the intrinsic motivation task. Finally, participants were debriefed. Dependent measures Performance in each visual comparison task was measured using signal-detection measures of computed sensitivity and response bias ( d ′ and C ; Phillips et al., ; Stanislaw & Todorov, ). To calculate sensitivity and bias in visual comparison, we coded hits as correct judgements on match trials and false alarms as incorrect judgements on non-match trials (see Phillips et al., for further discussion on the use of signal-detection measure in forensic science decision-making). Higher d ′ values indicate higher sensitivity to the presence of a target stimulus independent of a tendency to respond ‘same’ or ‘different’ (response bias) and higher values are typically interpreted as higher ‘accuracy’ in a task. We also calculated participants’ criterion ( C )—a measure of tendency to respond ‘same’ or ‘different’. Intrinsic motivation scores were calculated by averaging participants’ Likert-scale responses on the effort, enjoyment , and perceived competence inventory sub-scales (including the reverse-scored items). Statistical learning scores were calculated by averaging the number of trials participants correctly chose the most frequent feature, where higher scores indicated better statistical learning. Analytical approach We compared visual comparison sensitivity between expert-domain examiners, non-expert-domain examiners, and novices. We pre-registered our intention to recruit a specific sample size of examiners ( n = 50 per group), but we did not reach this sample size during our pre-registered data collection period. We therefore adapted our analytical approach to maximise the number of participants in exploratory analyses by categorising each forensic examiner as either an expert domain examiner (e.g., a fingerprint examiner’s fingerprint comparison sensitivity) or a non-expert-domain examiner (e.g., a face or firearms examiner’s fingerprint sensitivity). For the exploratory group-level analyses reported in-text, we thus compared the sensitivity between expert-domain examiners, non-expert domain examiners, and novices. 
To account for unequal variances between groups, Welch corrections were applied using the t.test function in the base stats package in R to all follow-up comparisons (Delacre et al., ). For the exploratory individual differences analyses reported in-text, we calculated Pearson’s correlations using the base stats package in R to investigate the relationship between examiners’ expert-domain visual comparison sensitivity, their aggregate non-expert-domain sensitivity, and their novel visual comparison sensitivity (e.g., Novel-Object Matching sensitivity). We also calculated exploratory Bayes correlations using the BayesFactor package in R to examine the likelihood of the data under the null hypothesis (i.e., absence of correlations) compared to an alternative hypothesis (Morey et al., ). In our study, we combined multiple test scores into aggregate scores for our correlational analyses. To standardise these scores, we used Z-score transformations of sensitivity ( d ′) scores. Specifically, we calculated these Z-scores based on the mean ( M ) and standard deviation (SD) of novice examiners’ sensitivity. This approach ensured that the examiners’ sensitivity scores were standardised relative to the normative performance of novices. As we were primarily interested in how individual differences were affected by expertise, we computed these correlations for forensic examiners only.
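To make the signal-detection and group-comparison steps above concrete, the following minimal R sketch shows how sensitivity (d′) and criterion (C) can be computed from trial-level responses and then compared across groups with Welch-corrected tests. It is illustrative only and not the authors' analysis script: the data frame `trials`, its column names, and the 0.5/1 correction applied to hit and false-alarm rates are assumptions made for the example.

```r
# Illustrative sketch only (not the authors' script). Assumes a hypothetical
# trial-level data frame `trials` with columns: participant, group
# ("expert-domain", "non-expert-domain", "novice"), trial_type
# ("match"/"non-match"), and response ("same"/"different").
library(dplyr)

sdt <- trials %>%
  group_by(participant, group) %>%
  summarise(
    # Hit = responding "same" on a match trial; false alarm = responding
    # "same" on a non-match trial. Adding 0.5/1 avoids infinite z-scores
    # when a rate is exactly 0 or 1 (an assumed correction, not stated in-text).
    hit_rate = (sum(trial_type == "match" & response == "same") + 0.5) /
               (sum(trial_type == "match") + 1),
    fa_rate  = (sum(trial_type == "non-match" & response == "same") + 0.5) /
               (sum(trial_type == "non-match") + 1),
    .groups = "drop"
  ) %>%
  mutate(
    dprime    = qnorm(hit_rate) - qnorm(fa_rate),          # sensitivity (d')
    criterion = -0.5 * (qnorm(hit_rate) + qnorm(fa_rate))  # response bias (C)
  )

# Omnibus comparison of sensitivity across the three groups, then a
# Welch-corrected follow-up (t.test defaults to var.equal = FALSE) and a
# Cohen's d effect size via the lsr package.
summary(aov(dprime ~ group, data = sdt))
expert_vs_novice <- subset(sdt, group %in% c("expert-domain", "novice"))
t.test(dprime ~ group, data = expert_vs_novice)
lsr::cohensD(dprime ~ group, data = expert_vs_novice)
```

The same d′ and criterion calculation applies to each of the four comparison tasks; only the set of trials entering the computation changes.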
Descriptive statistics The descriptive statistics for sensitivity of each group on each task can be seen in Table , and the psychometric properties of all tasks can be seen in Table in the “Appendix”.
Exploratory group analyses Visual comparison tasks Analyses were conducted using the base stats package in R, and effect sizes (i.e., Cohen’s d) were calculated using the lsr package (Navarro, ). Face comparison : Face comparison sensitivity differed significantly between the three groups (see Panel A in Fig. ; F (2, 175) = 6.63, p = .002). Face examiners ( M = 2.19, SD = .37) outperformed novices ( M = 1.65, SD = .67) in face comparison ( t (25.28) = 4.39, p < .001, 95% CI [.29, .79], d = .84), as well as fingerprint and firearms examiners ( M = 1.91, SD = .57; t (23.81) = 2.28, p = .032, 95% CI [.03, .53], d = .51). Fingerprint and firearms examiners also significantly outperformed novices ( t (161.89) = 2.73, p = .007, 95% CI [.07, .45], d = .42). These results suggest that all examiners outperformed novices in face comparison, but face examiners outperformed fingerprint and firearms examiners. Fingerprint comparison : Fingerprint comparison sensitivity significantly differed between the three groups (see Panel B in Fig. ; F (2, 175) = 215.60, p < .001). Fingerprint examiners ( M = 2.91, SD = .61) outperformed novices ( M = .38, SD = .62) in fingerprint comparison ( t (80.50) = 22.36, p < .001, 95% CI [2.31, 2.76], d = 4.13), as well as face and firearms examiners ( M = 1.60, SD = .81; t (77.65) = 8.47, p < .001, 95% CI [1.01, 1.63], d = 1.83). Face and firearms examiners also outperformed novices in fingerprint comparison ( t (65.25) = 8.74, p < .001, 95% CI [.94, 1.50], d = 1.78). These results suggest that all examiners outperformed novices in fingerprint comparison, but fingerprint examiners outperformed face and firearms examiners. Firearms comparison : Firearms comparison sensitivity significantly differed between the three groups (see Panel C in Fig. ; F (2, 175) = 33.88, p < .001). Firearms examiners ( M = 3.39, SD = .48) outperformed novices ( M = 2.31, SD = .94) in firearms comparison ( t (97.52) = 8.28, p < .001, 95% CI [.82, 1.34], d = 1.27), as well as face and fingerprint examiners ( M = 3.16, SD = .53; t (65.28) = 2.03, p = .047, 95% CI [.01, .46], d = .54).
Face and fingerprint examiners also outperformed novices in firearms comparison ( t (145.85) = 7.06, p < .001, 95% CI [.61, 1.09], d = 1.05). These results suggest that all examiners outperformed novices in firearms comparison, but firearms examiners outperformed face and fingerprint examiners. Novel-object comparison: Novel-object comparison sensitivity significantly differed between groups, as all examiners combined ( M = 2.29, SD = .67) outperformed novices ( M = 1.41, SD = .64; t (173.04) = 8.93, p < .001, 95% CI [.69, 1.07], d = 1.34). These results suggest that examiners in all groups outperform novices in an entirely unfamiliar comparison task.
Discriminant validity tasks Intrinsic motivation: Intrinsic motivation differed significantly between groups: forensic examiners ( M = 4.80, SD = .89) reported lower intrinsic motivation than novices ( M = 5.15, SD = .89; t (174.67) = 2.64, p = .009, 95% CI [.09, .62], d = .40). These results suggest that examiners were less intrinsically motivated than novices during the study and suggest that examiners’ visual comparison advantage cannot be attributed to higher intrinsic motivation. Statistical learning: Statistical learning did not significantly differ between novices ( M = .48, SD = .19) and forensic examiners ( M = .53, SD = .22; t (167.49) = 1.46, p = .147, 95% CI [− .11, .02], d = .22). This suggests that the examiners’ advantage in visual comparison does not extend to statistical learning, providing evidence of divergent validity.
Exploratory individual difference analyses Examiners’ visual comparison sensitivity outside their domain (i.e., non-expert-domain examiners) significantly correlated with their sensitivity in the novel comparison task ( r = .301, p = .005; see Fig. ). The observed Bayes Factor of 9.78 provided substantial evidence in favour of the observed correlation (Wetzels et al., ). Examiners’ sensitivity within and outside their domain did not significantly correlate with one another ( r = − .09, p = .389, BF 10 = 0.35) and the observed Bayes Factor provided anecdotal evidence for the absence of a correlation. Sensitivity within examiners’ domain also did not significantly correlate with novel-object sensitivity ( r = .202, p = .064, BF 10 = 1.25) and the observed Bayes Factor provided anecdotal support for the presence of a correlation. Within-domain sensitivity was significantly correlated with intrinsic motivation ( r = .269, p = .013, BF 10 = 4.57), but no other correlations were significant.
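As a companion to the correlational results above, the short R sketch below illustrates the individual-differences pipeline described in the analytical approach: z-scoring examiners' sensitivity against novice norms for each task, averaging within domain, and then testing correlations with both a frequentist test and a Bayes factor from the BayesFactor package. The data frame `scores` and all column names are hypothetical; this is a sketch of the general approach rather than the authors' code.

```r
# Illustrative sketch only; `scores` is a hypothetical data frame with one row
# per participant x task: participant, group ("examiner"/"novice"), task,
# domain ("expert", "non_expert", "novel"), and dprime.
library(dplyr)
library(tidyr)
library(BayesFactor)

# Novice mean and SD per task serve as the normative reference.
novice_norms <- scores %>%
  filter(group == "novice") %>%
  group_by(task) %>%
  summarise(nov_m = mean(dprime), nov_sd = sd(dprime), .groups = "drop")

# Z-score examiners against the novice norms, average within domain, and
# reshape to one row per examiner with expert-domain, non-expert-domain,
# and novel-object scores.
examiner_z <- scores %>%
  filter(group == "examiner") %>%
  left_join(novice_norms, by = "task") %>%
  mutate(z = (dprime - nov_m) / nov_sd) %>%
  group_by(participant, domain) %>%
  summarise(z = mean(z), .groups = "drop") %>%
  pivot_wider(names_from = domain, values_from = z)

# Pearson correlation between non-expert-domain and novel-object sensitivity,
# plus a Bayes factor quantifying evidence for vs. against a correlation.
cor.test(examiner_z$non_expert, examiner_z$novel)
correlationBF(examiner_z$non_expert, examiner_z$novel)
```

Run on examiner data structured this way, cor.test returns correlation coefficients and p values of the kind reported in-text, and correlationBF returns the corresponding BF10 values.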
We explored the relationship between domain-general visual comparison and expertise by comparing the expert-domain and non-expert-domain performance of forensic examiners from three disciplines (faces, fingerprints, and firearms). Examiners in all three disciplines had a distinct domain-specific advantage: within their own domain, examiners outperformed both novices and examiners outside their domain. That is, fingerprint examiners outperformed everyone in fingerprint comparison, firearms examiners outperformed everyone in firearms comparison, and face examiners outperformed everyone in face comparison. Forensic examiners’ perceptual expertise also generalised. Outside their own domain, examiners outperformed novices in visual comparison, both in forensic tasks outside their area of speciality and with entirely novel stimuli. For example, fingerprint examiners outperformed novices in face and firearms comparison, despite these tasks being outside their area of expertise. This generalisation of perceptual expertise is consistent with other examples of generalisable expertise: trained musicians’ skill generalises to speech segmentation (François et al., ) and skilled athletes’ abilities generalise to other sports (e.g., baseball to cricket, see Moore & Müller, ; and hockey to soccer, see Smeeton et al., ). Yet, our results also revealed a paradox. We identified substantial support for a relationship between individual differences in visual comparison amongst experts outside of their domain (i.e., non-domain and novel-object comparison). This replicates the relationship seen in visual comparison where ability generalises across different tasks (Growns et al., , ; Phillips et al., ). While examiners were better than novices outside of their domain of expertise at a group level, examiners' expert-domain performance was a poor predictor of individual differences in their ability outside their domain of expertise. We identified only anecdotal support for relationships between domain-specific skill and performance on both non-domain and novel-object comparison. So how can their expertise generalise if they do not have any shared variance? One potential explanation is that acquired expertise reduced the variability of expert-domain performance, reducing the power of its predictive validity by restricting the range of scores. Experience and training may disrupt the relationship between individual differences in these tasks amongst experts (Curby & Gauthier, ; Wong et al., ). The source of forensic examiners’ generalisable skill remains unclear. It is possible that forensic examiners self-select into professions for which they already possess innate aptitude (Growns et al., , ). Alternatively, visual comparison tasks may share core similarities that allow examiners to transfer strategies they learn in one discipline to tasks in other disciplines.
Forensic examiners may develop specialised information processing strategies that facilitate both their superior performance and their skill generalisation. Supporting this, fingerprint examiners’ eye-gaze patterns are more consistent with one another than novices’ (Busey et al., ). If visual comparison tasks do share core similarities, examiners may harness specialised information processing techniques to excel in tasks outside their domain (see also Dunn et al., ). Future research could utilise eye-tracking methodology to investigate how forensic examiners sample information during domain-specific and domain-general visual comparison (Brams et al., ). These results also add to growing evidence of a domain-general visual comparison ability (Growns et al., ; Phillips et al., ). Yet, the psychological mechanisms underpinning visual comparison are only beginning to emerge. Several cognitive processes have been implicated in forensic expertise, including domain-specific statistical learning (Busey et al., ; Growns & Martire, ; Growns et al., ; Martire et al., ), domain-specific visual search (Robson et al., ; Searston & Tangen, ; Thielgen et al., ), featural processing (Thompson & Tangen, ; Towler et al., ; White et al., ), and memory retention (Busey & Vanderkolk, ; Thompson & Tangen, ; for review see Growns & Martire, ; Growns & Neal, ). Yet, the relative contribution of these cognitive processes in both the development of expertise and generalisable visual comparison skill is not yet known. Future research should investigate the shared cognitive and perceptual mechanisms underpinning this skill. Together, these results highlight both the domain-specific and domain-general nature of forensic feature-comparison expertise. Consistent with contemporary models of expertise (Ericsson, ; Ericsson et al., ; Keith & Ericsson, ), forensic examiners displayed perceptual expertise within their own domain of training and experience. Examiners may learn key domain-relevant information that contributes to this superior domain-specific skill—for example, diagnostic information key to specific stimuli (Growns & Martire, ; Growns et al., , ; Martire et al., ; Towler et al., ). These results also have important applied implications. Forensic examiners possess a key capability that could generalise to performance advantages outside their key domain of expertise. However, our data clearly show that domain-specific skill lends the greatest performance boost. Thus, it would be imprudent to recommend that examiners practice outside the discipline that they are trained in—particularly given the high-stakes nature of their judgements within the criminal justice system. Further, our individual difference results suggest that one important way professional performance in forensic science could be improved is by recruiting new forensic trainees based on innate talent. Some forensic organisations have already begun to use screening tests to identify and recruit top-performers in face recognition to work in forensic roles involving face comparison (Dunn et al., ; Nador et al., ; Robertson et al., ; White et al., ). It is important to note that further research is vital to understanding the generalisation of perceptual expertise in forensic feature comparison. We were not able to recruit our intended sample size of experts in this study—something that is not uncommon in many studies recruiting specialist or expert populations (Martire & Kemp, ; Shen et al., ). 
We therefore adapted our pre-registered analysis plan, and the data in this study should be interpreted with caution. Nevertheless, it is important to note that the number of experts recruited in this study was comparable to other research recruiting forensic examiners (Busey & Vanderkolk, ; Growns & Martire, ; Growns et al., ; Martire et al., ). Another important factor to consider is potential differences in motivation between groups and between tasks. We attempted to control for differences in motivation by rewarding top performers in all tasks, and we do not believe this meaningfully affected the pattern of results because novices were not aware that examiners had participated in the study (Ma et al., ); it remains possible, however, that examiners were more motivated in their expert-domain tasks than in the others. Although research to date has only investigated the impact of motivation on visual comparison performance in novices (Moore & Johnston, ), future research should examine how motivation is shaped by expertise. This study offers novel evidence of the domain-specific and domain-general nature of the perceptual expertise of forensic feature-comparison examiners. Face, fingerprint, and firearms examiners outperform all others within their domain of expertise, but all examiners outperform novices in tasks outside their usual discipline. These results have theoretical implications for the domain-general nature of perceptual expertise, as well as important applied implications for decision making in forensic science.
Catheterization Techniques for Anomalous Aortic Origin of Coronary Arteries
Introduction Anomalous aortic origin of a coronary artery (AAOCA) represents a rare congenital anomaly, with an angiographic prevalence of 0.8% . Diagnosing AAOCA in adults involves different modalities such as coronary computed tomography angiography (CCTA), invasive coronary angiography (ICA), or less commonly, echocardiography. Encountering an unknown AAOCA poses challenges for interventional cardiologists, affecting catheter choice, engagement maneuvers, contrast media administration, and percutaneous coronary intervention procedures . The complexity of AAOCA catheterization not only extends procedure duration and contrast medium use, but also may delay intervention in acute coronary syndromes. While ICA offers 2‐dimensional imaging, it often falls short in accurately analyzing certain AAOCA. Interventional cardiologists can leverage CCTA's 3‐dimensional visualization of cardiac structures to enhance procedural planning and catheterization. Literature on AAOCA catheterization techniques remains limited . This review aims to address common AAOCA scenarios through an artery‐by‐artery approach. The first part outlines strategies to improve diagnostic catheterization success rates. In the second part, the role of intracoronary imaging and coronary physiology assessment during catheterization is discussed. Additionally, the contribution of CCTA to AAOCA diagnosis and work‐up is presented. This review focuses on the adult population.
General Diagnostic Catheterization Strategy 2.1 Identifying AAOCA Origin Switching to the contralateral vessel is recommended if the coronary artery cannot be visualized in its usual location. Many cases of AAOCA arise from the contralateral artery or sinus. If the origin is near the contralateral ostium, a highly selective catheter engagement may miss an AAOCA (Figure ). An aortic root angiogram can provide valuable information about coronary artery origins and catheter selection. While radial access is the preferred route for ICA, a femoral approach may be necessary in cases of hostile brachial access. Using guide catheters instead of diagnostic catheters can improve stability and facilitate the use of guidewires or guide extension catheters. If the ectopic artery remains elusive despite standard angiographic attempts, prolonging the procedure should be avoided in patients without ST-elevation myocardial infarction, and scheduling a CCTA should be considered. While modified guide catheters have been developed for specific AAOCA cases , their availability varies by country. Therefore, the strategies described below are based on the use of standard catheters, which can be adapted through manual maneuvers to change the distal tip orientation . Additionally, CCTA images, sometimes with inverted views as seen by the operator, have been added to provide valuable insights into coronary anatomy and catheter maneuvers. The Central Illustration summarizes the suggested catheters according to the site of origin of each AAOCA.
2.2 Considerations for AAOCA Ectopic Course Familiarity with various ectopic courses associated with AAOCA is crucial to avoid misdiagnosis. Four primary ectopic courses based on their relationships to the great vessels have been identified: prepulmonic, subpulmonic, interarterial, and retroaortic . The frequency of each ectopic course varies depending on the type of coronary artery involved.
Generally, the circumflex (Cx) artery, right coronary artery (RCA), and left main (LM) or left anterior descending (LAD) artery account for approximately 50%, 30%, and 20% of AAOCA angiographic cases, respectively . AAOCA must be categorized based on their associated risks. Those with an interarterial course are identified as having a risk of myocardial ischemia, sudden cardiac death (SCD) or aborted cardiac arrest . Few AAOCA with a subpulmonic course are associated with potential myocardial ischemia .
Artery‐Specific Catheterization Strategy 3.1 Anomalous Origin of the Cx Artery This AAOCA is the most frequent, with an angiographic prevalence of 5/1000 . Its anatomic phenotype is well‐established, typically originating in the RCA or right sinus, almost always associated with a retroaortic course . The origin of a Cx artery arising from the RCA is in close proximity to the right ostium (Figure ). Engagement of the Judkins right (JR) 4 catheter in the RCA, coupled with a clockwise maneuver and slow withdrawal, is recommended. Alternatively, Amplatz right (AR) 1 or 2 catheters may be used. If necessary, placement of a 0.014‐in.
guidewire in the distal RCA can stabilize the catheter. An anomalous Cx artery origin in the right sinus lies near, below, and to the right of the RCA ostium. Engaging the catheter in the RCA and slowly withdrawing it from the RCA ostium with a clockwise rotation is optimal (Figure ). Catheters such as JR 4, AR 1 or 2, Amplatz left (AL), or Multipurpose (MP) should be used. The retroaortic course is characterized by a long path with a marked downwards concave curve, typically visible in left anterior oblique (LAO) or right anterior oblique (RAO) incidence. During left ventriculography in RAO incidence, the “dot‐sign” can be observed, that is, the mid ectopic Cx artery appears as a dot behind the aorta and below the RCA. There is speculation that the retroaortic course is associated with a higher incidence of coronary artery disease (CAD) compared to other ectopic courses .
3.2 Anomalous Origin of the RCA The prevalence of this AAOCA is close to 3/1000 in the general population . The most common anatomical phenotype involves an anomalous origin within the left sinus, typically near the left ostium (Figure ). In over 90% of cases, the ectopic course is interarterial . Rare alternative courses include a retroaortic course in cases of origin within the non‐coronary sinus, or a prepulmonic course in cases of origin within the mid LAD artery (Figure ). A high take‐off from the ascending aorta (> 5 mm above the sinotubular junction) can be observed (Figure ). In such cases, the initial course may be normal (ectopic ostium above the appropriate sinus) or interarterial (ectopic ostium above the contralateral sinus). An exceptional anomaly is a single coronary artery (Figure ) with a single left ostium and retrograde filling of the RCA by the left network . Catheterization of a right AAOCA is always challenging due to the tangential aortic pathway often associated with an intramural aortic passage . This can result in coronary morphological deformation, particularly with an ostial slit‐like shape, which makes selective engagement rare. To increase the success rate of visualizing a right AAOCA, the recommended maneuvers are described in Figure . An Extra Back‐Up (EBU) 3.5 (or equivalent) is often the first choice. From the left ostium, a clockwise rotation is optimal to engage the right ostium (Figure ). A guide extension catheter can be utilized to improve opacification of the right AAOCA (Figure ). Alternatively, AL 0.75, 1, 2, or XB 3, or Contralateral Support 3 guide catheters can be used. An intramural aortic passage can be suspected based on angiographic views, characterized by an enlarged and hypodense appearance in the initial arterial millimeters in LAO incidence, and a flute mouthpiece shape in RAO incidence (Figure ). A lack of intramural aortic passage is characterized by a mild narrowing in RAO incidence (Figure ). A remarkable finding is the rare occurrence of an RCA originating in the proximal LM artery, always associated with an interarterial course (Figure ). In the case of anomalous high take‐off from the aorta, the use of AL or MP catheters is advised, employing techniques similar to those used for saphenous vein graft engagement. A common observation is the funnel‐like appearance between the aorta and the ectopic coronary artery (Figure ).
3.3 Anomalous Origin of LM or LAD Artery Anomalous origin of the LM or LAD artery encompasses a wide spectrum of anatomical variations, with a global angiographic prevalence estimated at 2/1000 cases .
The ectopic ostium can be situated in the proximal RCA, the right sinus, or the ascending aorta. Four ectopic courses have been identified: prepulmonic, subpulmonic, interarterial, and retroaortic (Figure ). Notably, the subpulmonic course is the most frequently encountered in the adult population, while the interarterial course is relatively rare . In instances of a prepulmonic course, the LM or LAD artery arises from the proximal RCA or the right sinus, either adjacent to, above, or at the level of the right ostium, and to the left of it. Conversely, in cases of a subpulmonic course, the LM or LAD artery originates from the proximal RCA or the right sinus, typically below or at the level of the right ostium, and to the left of it. In both scenarios, when the JR 4 catheter is engaged in the RCA, it should be gently pulled back while executing a counterclockwise maneuver (Figure ). In cases of a retroaortic course, the LM or LAD artery arises from the proximal RCA or the right sinus, positioned near or below the right ostium and to the right of it. To engage the LM or LAD artery with a retroaortic course, a slow withdrawal of the JR 4 catheter engaged in the RCA is essential, accompanied by a clockwise maneuver (Figure ). Occasionally, an MP catheter can be used for LM or LAD artery with a subpulmonic or retroaortic course. An EBU guide catheter may be more suitable for LM or LAD artery with a prepulmonic course. Selective engagement of a LM with an interarterial course may be facilitated due to a less deformed ostium compared to a RCA. Typically, a JL 3.5 catheter or EBU 3.5 guide catheter suffices (Figure ). An intramural aortic passage may be present solely on the mid part of the ectopic LM. To engage the LM with a JR 4 catheter, a counterclockwise maneuver is required after withdrawal from the right ostium. With experience, different ectopic courses of the LM or LAD AAOCA can be discerned through angiography . In LAO incidence with a subpulmonic or retroaortic course, a long path with a downwards concave marked curve may be observed (Figure ). Identification of a septal artery in the ectopic segment indicates a subpulmonic course, distinguishing it from a retroaortic course (Figure ). In RAO view, the eye sign formed by the ectopic LM and Cx artery is indicative of a left AAOCA with a prepulmonic or retropulmonic course (Figure ). In RAO incidence with a prepulmonic or interarterial course, a path with an upwards convex marked curve may be observed (Figure ). Compared to the interarterial course, the path of a prepulmonic course is typically longer and more meandering, while an interarterial course is generally associated with a straight course. For anomalous high take‐off from the aorta, AL or MP catheters are necessary. Multiple ectopic connections may coexist in a single patient, necessitating the use of various catheters (Figure ). 3.4 Other Anomalies Several other uncommon AAOCA have been documented. A single coronary artery represents a distinct abnormality previously described, which should be distinguished from other AAOCA with a single ostium . An abnormal coronary origin in the appropriate sinus, but very eccentric, may include an RCA ostium located near the right‐left commissure, or a LM ostium situated near the left‐non‐coronary commissure. In such cases, the RCA may exhibit a short interarterial course, while the LM artery may demonstrate a short retroaortic course. 
Achieving selective engagement in these locations might prove challenging, often necessitating the use of non‐standard diagnostic catheters such as AL, AR, MP, or 3DRC. A detailed plan may not always be applicable to this type of abnormality. Instances where the LM or RCA arises from the non‐coronary sinus, typically associated with a retroaortic course of variable length, are also observed (Figure ). Standard diagnostic catheters can be employed in such cases, with a counterclockwise maneuver often required to move the catheter tip posteriorly. Differentiating between ostial hypoplasia/atresia and an ostial occlusion related to CAD can be challenging. The presence of an ostial stump, a non‐tortuous collateral network, and the absence of CAD typically align with the definition of coronary ostial hypoplasia or atresia. Nevertheless, this distinction is often not possible by ICA alone and requires further evaluation by CCTA.
Transcatheter Aortic Valve Implantation (TAVI) and AAOCA With the development of TAVI, operators have become aware of the risk of external compression of AAOCA during valve expansion . The retroaortic course, mainly observed with LM and Cx artery, is particularly at risk due to its close anatomical relationship with the aortic valve annulus (Figure ). The risk of extrinsic compression can occur at the middle part of the ectopic course. Before TAVI, balloon aortic valvuloplasty with simultaneous coronary injection can be performed to assess tolerance. During the TAVI procedure, placement of a guidewire and an unexpanded stent in the ectopic coronary artery is advised for AAOCA at risk of compression (Figure ). The TAVI procedure should be carried out as usual practice for other AAOCA. Evaluation of Coronary Morphology and Physiology Approximately one‐third of AAOCA, predominantly right AAOCA, diagnosed in the adult population are deemed at risk of myocardial ischemia or SCD.
Although rare, left AAOCA with a subpulmonic course, especially when associated with coronary hypoplasia (Figure ) or deep intramyocardial passage, have been linked to myocardial ischemia. The absolute annual risk of SCD in patients with AAOCA is exceedingly low, although poorly characterized; it is estimated at 0.02%−0.05% and 0.1%−0.2% for right and left AAOCA with an interarterial course, respectively . Furthermore, this risk is predominantly observed in athletic populations and decreases significantly after the age of 35 years . Conducting individual risk assessments can be challenging, particularly when AAOCA is incidentally discovered. In addition to the information provided by CCTA, supplementary morphological and physiological evaluations during cardiac catheterization may prove beneficial for informed decision‐making. 5.1 Morphologic Evaluation The invasive morphological evaluation of AAOCA relies on intracoronary imaging techniques, that is, optical coherence tomography (OCT) or intravascular ultrasound (IVUS). OCT has a higher axial resolution and pullback speed compared to IVUS. However, the latter remains the gold standard for assessing AAOCA with an interarterial course . IVUS allows simultaneous visualization of the arterial lumen, arterial wall, and adjacent structures. Certain probes equipped with the ChromaFlo function facilitate accurate analysis of the anatomical relationships of an interarterial course with the aorta and pulmonary artery. This type of imaging also aids in understanding the morphological adaptations of an interarterial course, where the ectopic coronary artery must navigate a constrained space between the arterial trunks, smaller than the normal arterial diameter. As the artery traverses behind the pulmonic structures to reach the aorta, the arterial surface area diminishes, transitioning first to an oval shape with a ratio (degree of eccentricity) between the long and short axis < 2.0. An intramural aortic passage then leads to an ellipsoid arterial deformation with a ratio between the long and short axis ≥ 2.0. The ostium then has a slit‐like shape (Figure ). IVUS allows a quantitative assessment of an AAOCA with an interarterial course (Figure ). The absence of perivascular cell density, resulting from the lack of adventitia, and the segmental disappearance of the typical 3‐layered aspect may also suggest an intramural passage. In resting conditions, IVUS imaging may reveal a dynamic area narrowing (approximately 5%−10% in systole) of the intramural segment due to aortic expansion . 5.2 Physiological Assessment The challenge of inducing myocardial ischemia with non‐invasive tests is well recognized, even in high‐risk individuals with symptomatic AAOCA.
The limited reduction in luminal area, typically not exceeding 70%, may account for the high incidence of negative results in non‐invasive tests. A two‐tier concept has been proposed to elucidate the occurrence of myocardial ischemia . This concept delineates a dynamic component in addition to the fixed component. The latter exhibits a physiological behavior similar to that of atherosclerotic stenotic lesions. Microvascular dilation induced by adenosine predominantly evaluates the fixed component of narrowing. Physiological measurements utilizing a pressure guidewire during pharmacological vasodilation‐induced hyperemia reveal a fractional flow reserve (FFR) value generally ranging between 0.80 and 0.90 when the decrease in luminal diameter exceeds 50%, but rarely falling below 0.80 (Figure ). The physiologic assessment of AAOCA via invasive methods has yet to be fully validated, and the relevant threshold value remains uncertain. Because the administration of a vasodilator via non‐selective catheterization may be incomplete, intravenous administration can potentially overcome this limitation. It has been suggested that inotropic stress induced by dobutamine is more adept at evaluating the dynamic component of narrowing. A compression of the intramural segment can be induced, resulting in decreased luminal area and increased vascular resistance, ultimately leading to impairment of coronary flow reserve. To enhance the effects of a dobutamine stepwise infusion up to 40 µg/kg/min, atropine (0.5−1 mg intravenously) and volume expansion (up to 3 L of saline) can be added to counteract the tendency of dobutamine to decrease cardiac preload, and thus blood pressure . This protocol, however, may not be applicable for every patient and cannot fully replicate vigorous and prolonged physical exercise. Nevertheless, the use of dobutamine slightly increases the incidence of positive invasive FFR results compared to adenosine. IVUS imaging can be employed at rest and during dobutamine infusion, with short round trips to identify increased dynamic compression. Resting indices, such as the instantaneous wave‐free ratio (iFR), resting full‐cycle ratio (RFR), or diastolic hyperemia‐free ratio (DFR), have not been extensively evaluated yet. Additionally, angiography‐derived FFR calculation can be performed using quantitative flow ratio (QFR). Even though this technology is not validated for ostial lesions, a recent study has shown that a non‐significant QFR value in RCA with interarterial course was associated with a good clinical outcome at 5 years . QFR calculation utilizing the Medis QFR software is based on 3‐dimensional quantitative coronary angiography and computational fluid dynamics, employing contrast medium progression in the coronary artery (Figure ). Beyond AAOCA with an interarterial course, invasive evaluation of certain AAOCA with a subpulmonic course may be beneficial in cases of ischemic symptoms and/or non‐invasive functional testing‐induced ischemia (Figure ). A decrease in luminal area due to a deep intramyocardial passage can result in a positive invasive test. 5.3 CCTA CCTA has emerged as the primary imaging modality for visualizing the origin and initial course of AAOCA . Except for an anomalous Cx artery origin, CCTA is recommended following the discovery of an AAOCA by ICA in the adult population.
Utilizing volume rendering with 3‐dimensional images facilitates the identification of ectopic courses, and clarifies their relationships with the great vessels when ICA 2‐dimensional imaging remains ambiguous (Figure ). It is crucial to distinguish between a subpulmonic and an interarterial course (Figure ) to avoid inappropriate decision‐making regarding a left AAOCA. A more (> 2 mm) or less deep intramyocardial passage can be observed in left AAOCA with a subpulmonic course (Figure ). A reduction of diameter and lumen area is often noticed in cases where an intramyocardial passage is present (Figure ). Multiplanar reformatted images play a central role in evaluating anatomical features of AAOCA with an interarterial course, such as luminal shape deformation, vessel narrowing (both diameter and area), and take‐off angle (Figure ). The diagnosis of intramural aortic passage generally relies on the association of the following criteria: degree of eccentricity ≥ 2.0, lumen diameter reduction ≥ 50%, and acute take‐off angle (< 45°). CCTA serves as the cornerstone of multimodality imaging, significantly contributing to the evaluation of individuals with AAOCA and guiding patient management . The identification of an intramural aortic passage is crucial for ensuring appropriate management of AAOCA with an interarterial course. CCTA can be extended beyond the scope of anatomical evaluation. Computational fluid dynamics simulations, such as those offered by HeartFlow Inc. (Redwood City, California), now provide physiological assessment (Figure ) through FFR measurement (FFR-CT) . However, it is important to note that while this non‐invasive approach offers valuable insights, it may not fully capture the spectrum of pathophysiologic changes observed in AAOCA during strenuous physical exertion .
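The three CCTA criteria just listed (degree of eccentricity ≥ 2.0, lumen diameter reduction ≥ 50%, acute take-off angle < 45°) can be written as a small rule, sketched below under stated assumptions. The thresholds come from the text; how the individual measurements are derived (here, eccentricity from the long/short lumen axes and diameter reduction from the short axis versus a reference diameter) and how many criteria must coincide are reader-supplied assumptions, so the function simply reports which criteria are met.

```python
from dataclasses import dataclass

@dataclass
class CctaMeasurements:
    long_axis_mm: float           # lumen long-axis diameter in the ectopic segment
    short_axis_mm: float          # lumen short-axis diameter in the ectopic segment
    reference_diameter_mm: float  # assumed reference lumen diameter outside the ectopic segment
    takeoff_angle_deg: float      # take-off angle relative to the aortic wall

def intramural_criteria(m: CctaMeasurements) -> dict:
    """Evaluate the three CCTA criteria quoted in the text for an intramural aortic passage."""
    eccentricity = m.long_axis_mm / m.short_axis_mm
    # Assumption: diameter reduction computed from the short axis against a reference diameter.
    diameter_reduction = 1.0 - (m.short_axis_mm / m.reference_diameter_mm)
    results = {
        "degree of eccentricity >= 2.0": eccentricity >= 2.0,
        "lumen diameter reduction >= 50%": diameter_reduction >= 0.50,
        "take-off angle < 45 degrees": m.takeoff_angle_deg < 45.0,
    }
    results["criteria met"] = sum(bool(v) for v in results.values())
    return results

if __name__ == "__main__":
    example = CctaMeasurements(long_axis_mm=4.2, short_axis_mm=1.6,
                               reference_diameter_mm=3.5, takeoff_angle_deg=30.0)
    print(intramural_criteria(example))
```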
Conclusions Locating and engaging an AAOCA during ICA can pose challenges. Having knowledge about the prevalence, site of connection, and initial ectopic course of different AAOCA can assist physicians in selecting the appropriate catheter and executing the most effective maneuvers. If necessary, a CCTA can provide more precise anatomical information. In cases of AAOCA with an interarterial course, evaluating coronary morphology and physiology using specific tools such as intravascular imaging and intracoronary hemodynamic measurements can aid in risk stratification. Continued research and clinical experience will further refine the current decision‐making algorithms for AAOCA. The authors declare no conflicts of interest.
The impact of DNA extraction on the quantification of
db98d0d4-0c8e-4a47-980a-026140a222e8
11302271
Microbiology[mh]
Accurate pathogen detection and quantification in drinking water are necessary to establish effective policies and minimize the risk of infection. Legionella is a large genus of waterborne bacteria comprising more than 60 species, of which approximately half have been identified as opportunistic human pathogens . The incidence of Legionnaires' disease has been increasing worldwide in the past decades. In Switzerland, the reported incidence rate per 100,000 population was 7.24 in 2023, compared to 6.61 in 2017 and 3.54 in 2014 . Current monitoring strategies heavily rely on culture-based methods , which have well-documented limitations, including being time-consuming, producing less precise results, and having a bias toward Legionella pneumophila . Molecular-based methods [i.e., quantitative/droplet digital PCR (qPCR/ddPCR)] offer an alternative to culture that addresses some limitations associated with culture-based methods but have inherent limitations of their own. For example, molecular methods can be used to obtain more precise results faster, but detect genetic material from both live and dead organisms. While some legislation already allows for molecular methods to be used for compliance with monitoring requirements , the routine use of such methods remains a future prospect. DNA extraction is an essential step in the molecular quantification of organisms and affects every downstream molecular analysis. DNA extraction methods consist of (i) lysis of cell membranes and nucleus to release DNA, (ii) separation of DNA from other cellular components and debris, and (iii) purification of the DNA. How well these steps are executed affects DNA yield, quantification of target organisms, and characterization of microbial community composition from various matrices . Although all DNA extraction methods share these general procedures, numerous commercial kits, protocols, and ad-hoc adaptations are available and described in the literature. For example, protocols.io, a large repository of laboratory protocols contributed by researchers, has more than 2,000 entries for DNA extraction methods and optimizations . While different commercial kits and adaptations are usually designed to extract DNA from specific matrices, and may therefore not perform optimally on others, it is notable that even different versions of the same commercial kit can produce different amounts and qualities of extracted DNA . Despite the documented effects of DNA extraction on downstream analyses and results, to our knowledge no information is available on how different DNA extraction procedures influence the quantification of Legionella spp. There is generally a lack of adequate reporting of DNA extraction recoveries and potential biases in the drinking water field, as highlighted by the Environmental Microbiology Minimum Information (EMMI) guidelines . To address this, the EMMI guidelines propose the use of negative and positive controls to identify potential contaminants or issues with the DNA extraction process. An external positive control is particularly important to quantify extraction efficacy. The guidelines appropriately leave the specific choice of control to the study authors , but as a result, there is no established practice for Legionella DNA extraction from environmental samples. A pure culture, a synthetic mixture of pure cultures, or cultures phagocytized by a host organism are all logical and common choices.
However, these would provide little information about how well the extraction method extracts Legionella DNA from an environmental sample with a complex mixed microbial community or about potential biases it introduces when performing sequencing analysis. From an ecological point of view, little information is available about the influence of DNA extraction on microbial community characterization in drinking water systems, particularly when distinguishing between the water and biofilm phases. In this study, we processed water and biofilm samples collected via a community science sampling campaign using two DNA extraction methods common in drinking water studies to demonstrate how the variability between and among methods can impact environmental sampling interpretation, with a specific focus on the quantification of Legionella spp. and community structure. Field-scale community science sample collection and processing Participant recruitment and sampling Employees from two research institutes were recruited via Listserv email to collect shower water and biofilm samples from their homes. Participants received a pre-labeled sampling kit that contained a sterile 1-L glass bottle, a new shower hose to replace their existing one, sealing caps to retain the water within their harvested shower hose, and detailed sampling instructions. Briefly, following >8 h of stagnation, the participant opened the shower tap and filled the 1-L bottle with the first flush of hot water until the bottle was full but not overflowing. The participant then removed the shower head and detached the existing shower hose, keeping the ends at approximately the same height to prevent water in the hose from leaking out. The used shower hose was capped with new threaded PVC caps to retain the water inside the hose and replaced with a new one. Samples were delivered to the lab for processing within 24 h. The experimental workflow is shown in . Sample processing—water After mixing the 1-L sample, 100 mL was aliquoted to a sterile glass container, total cells were quantified using flow cytometry, and L. pneumophila was quantified using the IDEXX Legiolert liquid culture kit (IDEXX Laboratories, Inc, Westbrook ME, USA) according to the manufacturer's instructions. The remainder of the water sample was filter-concentrated onto duplicate 0.2-µm polycarbonate filters (Steriltech Corporation, Auburn MA, USA), recording the volume filtered; each filter was fragmented using a flame-sterilized scalpel and tweezers, and frozen at −20°C until DNA extraction was carried out. Sample processing—biofilms Water contained in the shower hose was collected, and the hose exterior sheath (if applicable) was removed. The hose was filled with 20 mL of autoclaved 2-mm glass beads, and biofilm was eluted into filter-sterilized and autoclaved water through five rounds of filling the hose with the sterile water, sonicating the hose so that the vibration of the beads suspended the attached biomass, and then collecting the water within the hose after each sonication. Each time the water was collected, the end from which the water flowed from the hose was alternated to create as much flow reversal and mixing as possible. After the last round of sonication/collection, the glass beads were removed, the hose was filled with 50 mL of water a final time and inverted 30 times, and the water was collected into the sample bottle. The total amount of water used for biofilm elution, and the length and diameter of the hose, were recorded.
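Recording the hose length and diameter together with the elution volume makes it possible to relate eluted-biofilm measurements to the inner surface area that was sampled. The following minimal sketch shows that normalization under the assumption of a simple cylindrical inner surface; the function name and the example numbers are illustrative placeholders, not values or code from the study.

```python
import math

def cells_per_cm2(tcc_per_ml: float, elution_volume_ml: float,
                  hose_length_cm: float, inner_diameter_cm: float) -> float:
    """Normalize an eluted-biofilm total cell count to the hose inner surface area.

    Assumes the wetted surface is a simple cylinder (pi * d * L); fittings and
    surface roughness are ignored.
    """
    surface_area_cm2 = math.pi * inner_diameter_cm * hose_length_cm
    total_cells = tcc_per_ml * elution_volume_ml
    return total_cells / surface_area_cm2

if __name__ == "__main__":
    # Placeholder example: 1e6 cells/mL in 150 mL of eluate from a 200 cm hose of 1.0 cm inner diameter.
    print(f"{cells_per_cm2(1e6, 150, 200, 1.0):.2e} cells/cm^2")
```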
Total cells were measured in the suspended biofilm sample after three rounds of 30 s of sonication of a 2-mL aliquot at 40% amplitude, diluting 1:1,000 or 1:10,000 as necessary. Eluted biofilm from the sample bottle was filter-concentrated onto duplicate 0.2-µm polycarbonate filters until clogging (5–70 mL), each filter fragmented using a flame-sterilized scalpel, and frozen at −20°C until DNA extraction was carried out. DNA extraction Two commercially available DNA extraction kits (hereafter indicated as Methods A and B) with the same adaptation were used on each water and biofilm sample set. Each kit is commonly used for extracting DNA from environmental water samples. Both Methods A and B are spin-column-based DNA extraction procedures, consisting of similar protocols, which involve the use of a chemical lysis solution, a bead-beating step, and a binding matrix solution prior to spin-column concentration, washing via centrifugation, and a final elution in 100 µL. The specific composition of the reagents provided for each step was not disclosed by the respective manufacturers. Adaptations to the manufacturer instructions for each kit included submerging fragmented filters in 6 µL of Lysozyme (50 mg/µL; Thermo Fisher Scientific, Waltham MA, USA) and 294 µL of 1× TE buffer for 1 h at 37°C mixing at 300 rpm, adding 30 µL of Proteinase K (20 mg/mL; Thermo Fisher Scientific, Waltham MA, USA) and 300 µL of DNA extraction kit cell lysis solution and continuing incubation for 30 min at 56°C mixing at 300 rpm, and adding 600 µL of chloroform (isoamyl alcohol, 24:1, suitable for nucleic acid purification) with DNA extraction kit lysis beads. These adaptations were previously described by Voslo et al. . The beads, dissolved filters, and final 1,230 µL of solution were then bead beaten on a vortex shaker at maximum speed for 5 min. Afterward, manufacturer instructions were followed. Each time DNA extraction was performed, a DNA extraction negative control (un-used filter) and a positive control were processed. The positive control consisted of replicate 100 mL bulk water samples collected from a bioreactor colonized by native Legionella (additional details provided in Fig. S5). The bioreactor water was collected in bulk, then 100-mL aliquots were concentrated onto replicate filters and frozen until DNA was extracted. L. pneumophila liquid culture and total cells were quantified in the bioreactor water to provide an independent comparison of the extraction efficiency of L. pneumophila and total cells. The impact of each pre-treatment step on the DNA extraction has been evaluated and reported in Fig. S1. Water quality measurements and methodology Total cell counts and extracted DNA Total cells were quantified using a CytoFLEX (Beckman Coulter, Inc., Brea CA, USA) flow cytometer in 250-µL aliquots stained using SYBR® Green I (SG, Invitrogen AG, Basel, Switzerland; 10,000× diluted in Tris buffer, pH 8). Stained cells were incubated for 15 min at 37°C prior to analysis . Extracted DNA was measured using a Qubit dsDNA HS assay (Thermo Fisher Scientific, Waltham MA, USA), with a linear detection range of 0.2–100 ng of double-stranded DNA. Legionella spp. and L. pneumophila gene copy enumeration Legionella spp. (ssrA) and L. pneumophila (mip) were measured with droplet digital PCR (ddPCR) using gene targets based on previously published assays validated to ISO/TS 12869 and adapted for the ddPCR platform .
Primer and probe sequences, master mix composition, and thermocycling conditions can be found in the supplemental material (Table S1). A ddPCR reaction negative control (DNase-free water) was included for each batch of master mix prepared and was always negative. A ddPCR reaction positive control (Centre National de Référence des Légionelles) was included on each thermocycling run. Droplet formation and PCR thermocycling were performed using a Stilla Geode (Stilla Technologies, Villejuif, France) and read using a Prism6 analyzer with Crystal Reader software imaging settings pre-set and optimized for PerfeCTa multiplex master mix (QuantaBio, Beverly MA, USA). Droplets were analyzed using Crystal Miner software. Only wells with enough total and analyzable droplets, as well as a limited number of saturated signals, were accepted according to Crystal Miner software quality control. Positive droplets were delineated using polygons, with positive wells being considered as those resulting in at least three droplets within the polygon. The limit of detection was 5 gc/reaction (1 gc/µL of template), and the limit of quantification was 12 gc/reaction (2.4 gc/µL of template). Any sample with significant intermediate fluorescence clusters (i.e., “rain”) was diluted 1:10 and rerun. 16S rRNA amplicon sequencing For sequencing, the V4 region of the 16S rRNA gene was amplified by PCR using the primers Bakt_515F–Bakt_805R , and the DNA was quantified by Qubit dsDNA HS Assay (Thermo Fisher Scientific, Waltham MA, USA). Samples were diluted, where possible, to a concentration of 1 ng/µL. A two-step PCR protocol was used to prepare the sequencing library: a first amplification (target PCR) was carried out with 1× KAPA HiFi HotStart DNA polymerase (Roche, Basel, Switzerland), 0.3 µM of each 16S primer, and 2 µL of template DNA. After amplification, the PCR products were purified with the Agencourt AMPure System (Beckman Coulter, Inc., Brea, USA). The second PCR (adaptor PCR) was performed with limited cycles to attach specific sequencing Nextera v2 Index adapters (Illumina, Inc., San Diego CA, USA). After purification, the products were quantified and checked for correct length (bp) with the High Sensitivity D1000 ScreenTape system (Agilent 2200 TapeStation; Agilent Technologies, Inc., Santa Clara CA, USA). Sample concentration was adjusted, and samples were subsequently pooled together in a library at a concentration of 4 nM. The Illumina MiSeq platform was used for paired-end 600 cycles (16S) with 10% PhiX (internal standard) in the sequencing run. Negative controls (PCR-grade water) and a positive control (self-made MOCK community) were incorporated. Primer sequences, master mix composition, and reaction conditions can be found in the Supplementary Information (Table S2). These steps were performed in collaboration with the Genetic Diversity Centre (GDC) of ETH Zurich. Data analysis 16S rRNA sequencing data were processed on HPC Euler (ETHZ) using workflows established by the GDC (ETHZ, Zurich). Detailed data-processing workflows are provided in the supplementary materials. For the 16S data set, all R1 reads were trimmed (based on the error estimates) by 25 nt, the primer region removed, and quality filtered. Ultimately, sequences were denoised with error correction and chimera removal, and amplicon sequence variants were established using UNOISE3 . In this study, the predicted biological sequences will be referred to as zero-radius operational taxonomic units (zOTUs).
Taxonomic assignment was performed using the Silva 16S database (v128) in combination with the SINTAX classifier. Samples were not rarefied to avoid the loss of data due to differences in the sequencing depth . Distance ordination and relative abundance were calculated using R (version 4.2.1) and RStudio (version 2022.07.2 + 576) with the Bioconductor package “phyloseq” (version 1.42.0) . Linear discriminant analysis effect size (LefSe) analysis was performed using Microbiome Analyst . SparCC correlation analysis was performed using the software FastSpar . All graphs were constructed with the R package “ggplot2” (version 3.4.0). Unless otherwise specified, all packages were operated using the default settings. Absolute abundance for Legionella quantification was calculated as follows: Absolute Abundance = Relative Abundance (16S amplicon sequencing) × Total Cell Count (Flow Cytometry) × DNA extraction efficacy
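The absolute abundance calculation above is a per-sample product of three quantities. The short sketch below reproduces the stated formula as written; the variable names and example values are illustrative only and are not taken from the study's data.

```python
def absolute_abundance(relative_abundance: float,
                       total_cell_count: float,
                       extraction_efficacy: float) -> float:
    """Absolute abundance as defined in the text: relative abundance (16S amplicon
    sequencing) x total cell count (flow cytometry) x DNA extraction efficacy."""
    return relative_abundance * total_cell_count * extraction_efficacy

if __name__ == "__main__":
    # Illustrative numbers only: 0.5% relative abundance, 1e5 total cells/mL, 35% extraction efficacy.
    print(f"{absolute_abundance(0.005, 1e5, 0.35):.1f} cell equivalents per mL")
```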
Comparison of DNA extraction efficacies The two DNA extraction methods used in this study yielded different amounts of DNA . Method A extracted substantially more DNA overall (median value 1.03 fg/cell, IQR: 1.70; n = 83) than method B (median value 0.03 fg/cell, IQR: 0.06; n = 74) (Wilcoxon paired test, P-value < 0.001). The median DNA/cell value detected with method A was 186-fold higher than that of method B for water samples, but only 13-fold higher for biofilm samples. However, this could partially be artificial due to the fact that biofilm samples seem to reach a plateau at approximately 10⁴ ng of extracted DNA (see Water and biofilm phase). Method B failed to extract detectable levels of DNA from 15 out of 50 water samples. These samples with low/no extracted DNA were still used for Legionella quantification through ddPCR, but were excluded from the sequencing run. To calculate the extraction efficacy for each sample, an average DNA-per-cell value of 4 fg/cell was used . The median value of the extraction efficacy for the water samples extracted with method A was 35%, while it was 0.2% for method B. For biofilm samples, the median value was 21.7% for the ones extracted with method A and 1.6% for method B. A few samples had an extraction efficacy above 100%, which is likely because the chosen value of 4 fg/cell does not consider the heterogeneity in DNA content across different bacteria, particularly in complex communities . Impact on Legionella quantification The differences in DNA extraction affected the detection and quantification of Legionella spp. and L. pneumophila . The quantification of Legionella spp. using ddPCR was significantly higher (Wilcoxon paired test, P-value < 0.001) for the samples extracted with method A than for the ones extracted with method B, showing a 39-fold median increase of gene copies detected per 100 mL . Legionella spp. was not detected with ddPCR in 21 water samples from method B and 10 samples (one water and nine biofilm) from method A , but in all cases where Legionella spp. was not detected, it was detected in the same samples with the other method. In nine samples among the 21 non-detected by method B, there was no quantifiable DNA in the first place (Comparison of DNA extraction efficacies). As with Legionella spp., the quantification of L. pneumophila using ddPCR was significantly higher (Wilcoxon paired test, P-value < 0.001) for the samples extracted with method A than for the ones extracted with method B, with a 44-fold increase of gene copies per 100 mL detected (Fig. S2).
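The extraction efficacy calculation and the paired comparison reported above can be illustrated with standard scientific-Python tooling. The sketch below assumes the 4 fg DNA per cell value stated in the text, uses placeholder arrays rather than study data, and applies SciPy's Wilcoxon signed-rank test as a stand-in for the paired test the authors ran in R.

```python
import numpy as np
from scipy.stats import wilcoxon

ASSUMED_DNA_PER_CELL_FG = 4.0  # average DNA-per-cell value assumed in the text

def extraction_efficacy(dna_ng: np.ndarray, cells: np.ndarray) -> np.ndarray:
    """Fraction of the theoretically available DNA recovered from each sample."""
    measured_fg_per_cell = dna_ng * 1e6 / cells  # 1 ng = 1e6 fg
    return measured_fg_per_cell / ASSUMED_DNA_PER_CELL_FG

if __name__ == "__main__":
    # Placeholder paired data: the same five samples extracted with methods A and B.
    cells = np.array([2e8, 5e7, 1e8, 8e7, 3e8])            # total cells per filter
    dna_a = np.array([800.0, 150.0, 420.0, 260.0, 900.0])  # ng DNA recovered, method A
    dna_b = np.array([30.0, 4.0, 12.0, 9.0, 40.0])         # ng DNA recovered, method B

    eff_a = extraction_efficacy(dna_a, cells)
    eff_b = extraction_efficacy(dna_b, cells)
    print("median efficacy, A:", float(np.median(eff_a)), "B:", float(np.median(eff_b)))

    # Paired, non-parametric comparison of the two methods (Wilcoxon signed-rank test).
    statistic, p_value = wilcoxon(dna_a, dna_b)
    print(f"Wilcoxon signed-rank: statistic={statistic:.1f}, p={p_value:.3f}")
```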
Overall, presence–absence agreement between ddPCR and cultivation was 50% for method A and 45% for method B (Fig. S3). As expected, there were samples for both extraction methods in which ddPCR detected L. pneumophila DNA while cultivation was negative. Method A had 16 samples that were positive by ddPCR but negative by culture (n = 43, 37%); method B had 10 (n = 42, 24%). However, there were also samples positive by culture but not by ddPCR. Method A had six samples that were culture positive but ddPCR negative (13%), while method B had 13 (31%). While this could be attributed to low DNA extraction efficacy for the 13 samples extracted with method B, the six samples extracted with method A showed overall good recovery (DNA extraction efficacy >50%), which cannot explain this observation. When determining the relative abundance of Legionella spp. from sequencing data, highly variable results were observed between methods. Legionella spp. was not detected through amplicon sequencing in six samples (four water, two biofilm) extracted with method B, in four samples (two water, two biofilm) extracted with method A, and in one sample with both methods. For a direct quantitative comparison with the ddPCR data, absolute abundance was calculated by multiplying relative abundance by the TCC while accounting for the DNA extraction efficacy. Quantification of Legionella spp. using amplicon-sequencing-derived absolute abundance remained significantly higher for method A than for method B, showing a 50-fold difference (Wilcoxon paired test, P < 0.001).

Impact on ecological observations

Besides Legionella quantification, the differences in the extraction methods also affected the community structure detected in the samples: bacterial genera were detected at different abundances in samples extracted with method A or B. Distances between samples were calculated using the Bray–Curtis index and visualized in a non-metric multidimensional scaling (NMDS) plot with a stress value of 0.222. The samples clustered more strongly according to phase [i.e., water or biofilm; permutational multivariate analysis of variance (PERMANOVA), P < 0.001, R² = 0.03], but they also clustered significantly with respect to the two DNA extraction methods used (PERMANOVA, P < 0.001, R² = 0.02). This suggests that the choice of extraction kit influences the observed community composition. To provide a better overview of the ecological differences caused by the different DNA extraction methods, LefSe analysis was used to determine the genera most likely to explain the differences between the two methods. The analysis showed that the relative abundances of 52 genera were enriched in either method A (32 genera) or method B (20 genera), with linear discriminant analysis (LDA) scores ranging between 2 and 5. Based on effect size, the genera Sphingomonas, Thermus, and Gemmata were the most enriched in method A (LDA scores: 5, 4.81, 4.59), while Caulobacter, Obscuribacteraceae, and Hirschia were the taxa with the strongest effect sizes for method B (LDA scores: 5.05, 4.87, 4.52).
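The ordination and PERMANOVA workflow described above was run in R with phyloseq, which wraps the vegan package for these calculations; whether vegan's functions were called directly is not stated. The toy sketch below reproduces the same steps with vegan; the simulated count matrix and metadata are placeholders for the real zOTU table, so the numbers it produces are not those reported here.

```r
library(vegan)
set.seed(1)

# Hypothetical stand-ins for the zOTU table (samples x zOTUs) and sample metadata
zotu_table <- matrix(rpois(20 * 30, lambda = 5), nrow = 20,
                     dimnames = list(paste0("S", 1:20), paste0("zOTU", 1:30)))
meta <- data.frame(phase  = rep(c("water", "biofilm"), each = 10),
                   method = rep(c("A", "B"), times = 10))

bray <- vegdist(zotu_table, method = "bray")             # Bray-Curtis distances
nmds <- metaMDS(bray, k = 2, trymax = 50)                # NMDS; inspect nmds$stress
adonis2(bray ~ phase,  data = meta, permutations = 999)  # PERMANOVA by sample phase
adonis2(bray ~ method, data = meta, permutations = 999)  # PERMANOVA by extraction kit
```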
While the genus Legionella was not identified by this analysis as enriched in either method, a LefSe analysis performed at the zOTU level detected six Legionella-associated zOTUs (zOTU492, zOTU620, zOTU1267, zOTU1386, zOTU1375, zOTU1565) whose relative abundances were all enriched in method B (LDA scores ranging from 1.77 to 3.33), but not in method A. In one case, a Legionella zOTU (zOTU1267) was classified to the species level and assigned to L. geestiana, while the other five zOTUs were classified only to the genus level. SparCC analysis was performed to detect correlations between the zOTUs linked to the genus Legionella and those representing the rest of the community. For the biofilm samples extracted with method A, 53 significant correlations (correlation coefficient >0.4) were detected (n = 45), while 3,442 correlations were detected in the samples extracted with method B (n = 45). For the water phase, 83 significant correlations were detected in the samples extracted with method A (n = 39), while 225 were detected in those extracted with method B (n = 22). Among the correlations detected, five were shared between the methods in the biofilm phase, while 35 were shared in the water samples.
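The SparCC/FastSpar output is a zOTU-by-zOTU correlation matrix, and the counts reported above come from filtering that matrix against the 0.4 threshold. The following toy sketch illustrates that post-processing step in R; the Pearson correlation matrix and the Legionella zOTU labels are hypothetical stand-ins for the FastSpar output and the taxonomic assignments.

```r
set.seed(1)
abund <- matrix(rpois(30 * 50, lambda = 5), nrow = 30,
                dimnames = list(NULL, paste0("zOTU", 1:50)))
cor_mat <- cor(abund)                               # stand-in for the SparCC correlation matrix
legionella_zotus <- c("zOTU3", "zOTU17", "zOTU42")  # hypothetical Legionella-assigned zOTUs
others <- setdiff(colnames(cor_mat), legionella_zotus)

# Number of Legionella-vs-community correlations above the 0.4 threshold
sum(cor_mat[legionella_zotus, others] > 0.4)
```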
The choice of the DNA extraction method is important

DNA extraction is a crucial step for downstream molecular analysis and has been identified as one of the most problematic steps, since it is prone to introducing biases and inefficiencies. Ineffective DNA extraction with lower DNA yields results in lower ddPCR concentrations for specific target organisms. This was previously demonstrated by Cerca and colleagues, who showed that DNA extraction efficacy is likely to influence the quantification of target organisms by qPCR in polymicrobial consortia. The authors extracted gDNA from three microorganisms in pure cultures at known concentrations (cultured individually or collectively) and performed qPCR with different calibration curves, of which only the one that corrected for DNA loss during extraction provided reliable results. Several previous ecology studies have noted the potential influence of DNA extraction methods on DNA yield, bacterial diversity, and relative abundance measurements, for example, in soil samples, human breast milk, and faecal samples. However, only a few studies have so far investigated the impact of different DNA extraction methods on drinking water samples. While each of these studies identified the method that best fit the needs of its experimental design, a perfect universal DNA extraction method does not exist at present, and the efficacy of the extraction protocol applied depends on the sample type and the presence of specific organisms. While these methodological differences are not surprising per se, little data are available on how DNA extraction influences the detection and quantification of microorganisms in drinking water, how this specifically affects the quantification of the opportunistic pathogenic Legionella species, and how these challenges can be mitigated if molecular methods are to be used quantitatively for routine monitoring within legislative frameworks. We found that different methods extracted significantly different amounts of DNA from the same samples and that this consequently influenced the quantification of Legionella via ddPCR.
We furthermore reported the extraction efficacy relative to an ideal situation in which all cellular DNA is extracted, using an average estimate of cellular DNA content. However, this is an approximation that does not account for variations in DNA content among different bacteria, and it requires access to a flow cytometer to obtain the total cell count. Other studies report the addition of external spike-in sequences to measure the recovery of a known amount of target. However, this approach does not provide any information about whether the cells were correctly lysed. The latter can be achieved by spiking whole-cell microorganisms at a defined amount, which provides quantifiable recovery of a target organism (or surrogate microbes) and is an approach also advocated in the EMMI guidelines. Regardless of the approach used, we advocate here for consistent measurement and reporting of the extraction efficacy and of the way it has been calculated, particularly in studies where quantitative data are generated. Such information allows an estimation of the DNA loss and opens the possibility of applying a corrective factor to quantitative data.

Water and biofilm phase

The water and biofilm phases revealed interesting differences in terms of DNA extraction. Both methods yielded less DNA per cell for the water samples than for the paired biofilm samples, but this effect was more pronounced for method B, which failed to extract detectable DNA from multiple water samples. While the lower yield can be partially attributed to the presence of extracellular DNA in the biofilm matrix, the poor extraction efficacy recorded overall with method B can only be attributed to the weak performance of the method itself. We observed a maximum DNA recovery of approximately 10⁴ ng, which appears to be an artificial limit at high cell concentrations. One possible explanation is that there is a maximum amount of nucleic acid that can bind to the extraction column used, referred to as the binding capacity. While every kit has a distinct binding capacity, the plateau observed in our results is consistent with that reported by several manufacturers, and this can potentially lead to an underestimation of DNA concentrations and, consequently, of specific target organisms when processing high-biomass samples. Our study also revealed a clear separation between the microbial communities detected in the water and in the biofilm phase, irrespective of the extraction method used. This is an important ecological observation, and only a few other papers have previously reported that distinct communities inhabit the two phases. In an ecological investigation of the communities in an unchlorinated distribution system, Liu and colleagues showed that bulk water and pipe biofilm clustered differently in a principal component ordination analysis, identifying key species dominating either the water (e.g., Polaromonas spp.) or the biofilm (e.g., Pseudomonas spp., Sphingomonas spp.). Similarly, Proctor and colleagues observed that the biofilm and bulk water bacterial communities were significantly different in shower hoses. In particular, it was shown that three taxa accounting for 91% of the biofilm sequences accounted for only 31% of the cold water sequences detected, while seven taxa predominant in the cold water accounted for only 2% of the biofilm sequences detected.
One possible explanation for this phenomenon is the early establishment of defined ecological niches within the biofilm, compared with the more dynamic bulk water, which is subject to flow and nutrient exchange. Biofilms have lower richness than the bulk water, meaning that only a few species dominate the biofilm ecological niches. This suggests that transient bulk water organisms have relatively little ability to establish themselves in developed biofilms. This could also explain why biofilm communities have been observed to be more stable over time than those in the water. With respect to Legionella, these differences are important, as it is known that the pathogen lives at higher concentrations in biofilms, as is the case with most opportunistic building plumbing pathogens. We observed this phase difference despite the fact that all shower hoses in our experiments were collected after stagnation, which perhaps provided more opportunities for exchange between the two phases. However, the result could partially be attributed to the sampling strategy adopted: we collected 1 L water samples, which included more than just the water stagnating in the shower hose (approximately 80 mL). The additional water coming from the building plumbing beyond the shower hoses might have masked any exchange between the two phases to some extent, thus increasing the magnitude of the separation observed. This highlights the importance of understanding and correctly reporting all upstream processing steps that may influence downstream analysis; it also demonstrates how considering both phases in ecological studies and routine monitoring can provide more information than typical monitoring plans. Previous studies have already adopted strategies for on-site removal and replacement of portions of biofilm-containing pipes, through dedicated points of entrance and aseptic insertion of new coupons. However, it remains unclear how representative such small portions are of the entire distribution-network biofilm, an issue that must be further investigated.

Relative and absolute abundance

To quantitatively compare the abundances of microorganisms across different samples and studies analyzed with 16S amplicon sequencing, it is useful to convert relative abundance values into absolute abundance. Several studies have reported the potential occurrence of opportunistic pathogens in environmental samples using sequencing data in the form of relative abundance. While these observations are ecologically relevant, relative abundance does not provide quantitative information on concentrations. An estimate of the absolute abundance can be obtained using different methods, of which the most commonly used are qPCR quantification of the total 16S rRNA gene and flow cytometric total cell counts; the latter was chosen for this study. Some considerations are, however, necessary. Since qPCR and sequencing use the same extract and are therefore subject to the same methodological biases, calculating absolute abundance with 16S qPCR data has the downside of carrying over any DNA loss that occurred during extraction, leading to inaccurate results. In this case, it is therefore necessary to account for the DNA loss during extraction (i.e., to determine the DNA extraction efficacy) when calculating absolute abundance.
Moreover, calculating absolute abundance with qPCR has additional limitations: previous studies have observed that qPCR can detect only large changes in gene concentrations and is strongly influenced by the primer pairs and the reaction conditions. Quantification based on flow cytometry, by contrast, can overcome these limitations, as it does not require the DNA extraction efficacy to be calculated separately to obtain absolute abundance.

Relevance of accurate Legionella detection and quantification

The use of different DNA extraction methods clearly has variable outcomes in relation to the accurate detection and quantification of target organisms (in this case Legionella spp.) in drinking water systems. This is relevant from two perspectives: (i) the implications for microbial ecology studies and (ii) the implications for the monitoring of Legionella spp. under regulatory settings.

Ecology

Ecological studies often aim to provide insights into the microbial composition of given samples under specific conditions, but accurate detection of the organisms involved and of their relative proportions is crucial for sound biological interpretation of the data collected. The most commonly reported examples of how DNA extraction affects ecological observations come from studies of the human microbiome. For example, in a study assessing the impact of DNA extraction procedures on the assessment of human gut composition, Kennedy and colleagues demonstrated that, within individual patients, community structures clustered together based on the kit used to extract the DNA from the samples. This effect was more or less pronounced depending on the distance metric used, which mattered because the study aimed to show differences in community composition between volunteers and patients with inflammatory bowel disease. In a different study involving the human oral microbiome, Lazarevic and colleagues found that while the most abundant taxa were detected in samples extracted with both methods, for some genera the relative abundances differed significantly depending on the kit used. Similar results were obtained in a study that compared DNA extraction kits and primer sets for freshwater sediment samples: no significant differences in overall community structure were detected, but the relative abundance of specific taxa varied significantly. Moreover, that study also highlighted differences in richness and relative abundance for the eukaryotic communities detected in samples extracted with two different kits. In the context of the microbial ecology of Legionella, the relevance of this lies mainly in observational studies aiming to understand how this opportunistic pathogen lives in aquatic systems through its relationships with the surrounding organisms. In a previous paper from our group, for example, we observed that Legionella spp. correlated positively and negatively with prokaryotic and eukaryotic microorganisms in biofilm samples from plumbing systems. These studies used different combinations of statistical approaches that process the sequencing output (often in the form of relative abundance) to infer correlations. In this context, biases introduced by the DNA extraction (in terms of community structure, abundance, and amplicon sequence variants detected) can influence the analysis and lead to incorrect ecological interpretations.
Our results, for example, demonstrate that the correlations inferred with SparCC vary with the extraction method used, both in the number of correlations and in the identity of the zOTUs involved. We argue that a correct understanding of the microbial ecology is crucial for the control of the pathogen in drinking water systems, and the possibility of comparing studies is an important tool toward this goal; however, biases due to the molecular protocol applied (i.e., DNA extraction) can work against the formulation of correct ecological hypotheses.

Legislative compliance and risk assessment

A reliable quantification of Legionella spp. is important to accurately control the level of the pathogen in engineered aquatic systems and to assess the risk linked to its presence. Quantitative microbial risk assessment (QMRA) uses information on pathogen concentrations to determine the health implications of microbial hazards. Thus, the measured concentration of the pathogen under investigation is of central importance for establishing the risk associated with exposure. With respect to QMRA of Legionella, most studies use concentrations measured with conventional culture approaches, which likely underestimate actual concentrations and, in turn, may lead to an underestimation of risk. Molecular methods (including qPCR and ddPCR) are generally more sensitive and overcome some of the limitations of traditional culture approaches but, as demonstrated in this study, are subject to errors arising from differences in DNA extraction efficacy. Therefore, the future inclusion of molecular methods in QMRA requires careful consideration, documentation, and reporting of the entire sample-processing pipeline. Similarly, the effect of DNA extraction on the quantification of Legionella also has implications for routine monitoring of the presence of Legionella. Normally, the water phase is collected, filtered, and then plated onto buffered charcoal yeast extract (BCYE) agar plates for enumeration and culture confirmation. Investigations of the presence of Legionella in environmental samples, which can be used as a surveillance strategy, are often carried out using a culture-dependent approach. For example, several studies have reported environmental monitoring of Legionella in hospital settings over multiple years to prevent nosocomial infections and to link cases of Legionnaires’ disease with the environmental source of infection. Interventions are required when the concentration of Legionella reaches the threshold indicated by the national authorities (in Switzerland, 1,000 colony-forming units (CFU)/L). This entire process, however, is time- and labor-consuming, as culturing Legionella typically takes 7–14 days. Moreover, culturing does not account for bacteria in a viable-but-not-cultivable state. Therefore, demand for the implementation of molecular methods (i.e., quantification of Legionella through qPCR/ddPCR) for assessing water quality has grown among practitioners and authorities, and the new EU legislation, for example, allows alternative methods to be used. The implementation of such molecular methods in routine monitoring calls for more detailed and standardized reporting of the protocols used.
Our data not only demonstrate the variability in Legionella concentrations obtained with different DNA extraction methods but also highlight that the pathogen went undetected in some samples extracted with one method while being quantifiable when the DNA was extracted with the other method used in this study. This has obvious regulatory consequences, since a biased estimate of the Legionella concentration due to extraction artifacts can lead either to unnecessary interventions (which are expensive for practitioners) or to increased risk. While we are aware of the challenges involved in standardizing methods, we strongly believe that extensive reporting of protocol details (e.g., the DNA extraction method used, the extraction efficacy, and how it was calculated) would enable reliable comparison among studies, monitoring strategies, and regulations.
Analysis of MDM2 and TP53 genes in canine liposarcoma
Canine liposarcoma is a relatively rare sarcoma of dogs. It predominantly originates in the subcutaneous tissue of the trunk and the proximal regions of the limbs. However, there have been reports of its occurrence at other sites, such as deep soft tissue (deeper than the subcutis) and the spleen. Canine liposarcoma can be categorized into four morphological variants that bear a striking resemblance to their human counterparts: well-differentiated, dedifferentiated, myxoid, and pleomorphic. In dogs, these variants primarily serve as morphological distinctions, while in humans, they represent distinct neoplastic entities with specific genetic alterations. Well-differentiated and dedifferentiated human liposarcomas are characterized by the presence of a giant ring chromosome that carries amplified copies of the MDM2 and CDK4 genes, which play essential roles in cell cycle regulation. These two tumor types are regarded as two ends of a morphological spectrum of the same neoplastic entity, distinct from myxoid and pleomorphic liposarcoma, with the dedifferentiated variant showing metastatic potential. Human myxoid liposarcoma is known to exhibit a reciprocal translocation between chromosomes 12 and 16, t(12;16)(q13;p11), resulting in the fusion of the DDIT3 gene with the FUS gene. Pleomorphic liposarcoma displays complex genetic rearrangements, and dysregulation of several tumor suppressor pathways (e.g., p53 and Rb1) is common in this subtype. Immunohistochemical studies have recently expanded our understanding of canine liposarcoma, revealing overexpression of tyrosine kinase receptors (TRKs), MDM2, and p53. This suggests that TRK pathways may be involved in tumor progression, that the MDM2 gene may be amplified as in humans, and that the TP53 gene could be mutated in the myxoid variant. However, our knowledge of the genetic status of canine liposarcoma remains limited, and the few cases examined in the literature have failed to demonstrate anomalies in these genes. The aim of this study was to investigate the presence of MDM2 gene amplification and TP53 gene mutations in a larger number of canine liposarcomas. We used fluorescence in situ hybridization (FISH) to assess MDM2 amplification and next-generation sequencing (NGS) to detect TP53 and MDM2 gene mutations.

Fifty-one cases of canine liposarcoma were included in this study. Among these cases, fifteen dogs were female (4 spayed and 11 intact), thirty-two were male (1 castrated and 31 intact), and in 4 cases, sex was not reported. Age was available for 47 of the 51 cases, ranging from 6 to 16 years, with a median age of 11 years. Among the liposarcomas, 41 affected the soft tissue, with 36 being subcutaneous, 3 intramuscular, and 2 intracavitary. Six cases were splenic, and in 4 cases, the specific site was not recorded. The distribution of subtypes included 26 well-differentiated, 10 myxoid, 8 pleomorphic, and 7 dedifferentiated cases. The mitotic count varied from 1 to 36, with a median of 5. Tumor grading classified 18 cases as grade 1, 28 as grade 2, and 5 as grade 3. Immunohistochemistry detected the expression of MDM2 in 21 cases (including 11 well-differentiated, 5 dedifferentiated, 3 myxoid, and 2 pleomorphic cases) and the expression of p53 in 6 cases, all of which were myxoid.

NGS

In four cases, there was insufficient tissue available in the paraffin blocks for NGS analysis; thus, 47 cases were included in the NGS analysis.
These comprised 20 well-differentiated liposarcomas, 10 myxoid liposarcomas, 8 pleomorphic liposarcomas, and 8 dedifferentiated liposarcomas. A TP53 mutation was identified in 15 of the 47 liposarcomas (31.9%), including 2 well-differentiated, 9 myxoid, 3 pleomorphic, and 1 dedifferentiated neoplasms. Of these, ten variants were missense single nucleotide variants (SNVs) and five were small indels (Table ). Three additional variants were single nucleotide indels within a homopolymer stretch and were therefore excluded from further analysis. Of the 15 mutations (2 well-differentiated, 9 myxoid, 3 pleomorphic, and 1 dedifferentiated), all except one were classified as "damaging" or "possibly damaging" using the PolyPhen2 tool ( http://genetics.bwh.harvard.edu/pph2/ ). Although PolyPhen2 was developed for the assessment of variants in human beings, the analysis was conducted by querying the tool with the canine reference amino acid sequence and the corresponding variant positions, thus making use of PolyPhen2's in silico prediction. PolyPhen2 has previously been used for this type of evaluation in canine specimens in the literature. One variant was categorized as "benign". The variant allele frequency (VAF) for these 15 mutations ranged from 8% to 76%. In two cases, a nonsense mutation was detected, leading to proteins truncated at amino acids T244 and S343; in two cases, a frameshift mutation was identified (Table ). Among the 15 mutated cases, 5 were immunohistochemically (IHC) positive for p53, while 10 were negative. Interestingly, one myxoid liposarcoma that tested p53 positive by IHC showed no mutation. Notably, no missense or small indel variants were detected in the MDM2 gene.

FISH

Of the 51 cases subjected to FISH analysis, 8 were technically inadequate and were therefore considered indeterminate and excluded. Among the remaining 43 cases, ten exhibited MDM2 amplification (23%), with MDM2/GMCL1 ratios ranging from 2.1 to 6.5. This subset included 6 well-differentiated liposarcomas, 2 dedifferentiated, 1 myxoid, and 1 pleomorphic. The amplification pattern in 6 of these 10 cases (60%) was cluster-type (Fig. ), characterized by closely stippled, adjacent, and numerous signals forming a large cluster within a specific area of the nucleus. In contrast, the remaining 4 cases (40%) exhibited a non-clustered amplification pattern, with signals scattered evenly throughout the nucleus. Among the 10 amplified cases, 4 tested positive for MDM2 by immunohistochemistry, while 6 were negative. Most cases, specifically 33 of 43 (77%), showed no MDM2 amplification and exhibited a diploid signal pattern. Notably, no cases displaying polysomy were observed. Among the 33 cases without MDM2 amplification, 13 were immunohistochemically positive for MDM2, while 20 were negative.

Statistical analysis

The occurrence of myxoid liposarcoma was significantly higher in the spleen than in other tissues (p = 0.003), whereas the other subtypes were predominantly observed in the subcutis. Myxoid liposarcoma and dedifferentiated liposarcoma exhibited significantly higher grades than pleomorphic liposarcoma and well-differentiated liposarcoma (p = 0.0004). However, there was no significant association between histological grade and tumor site (p = 0.79). A statistically significant association was found between immunohistochemical p53 positivity and TP53 mutation (p = 0.015).
However, there was no statistically significant association between MDM2 positivity and MDM2 amplification or between MDM2 and p53 immunohistochemical expression (p = 0.92 and p = 0.38, respectively). Statistical analysis revealed a strong association of the myxoid subtype (ML) with p53 immunohistochemical expression (p = 0.00003) and TP53 mutation (p = 0.00005) (Figs. and ). No association was found between histological subtypes and MDM2 immunohistochemical expression or gene amplification (p = 0.29 and p = 0.50, respectively). Similarly, there was no association between histological grade and MDM2 anomalies as detected by IHC and FISH (p = 0.79 and p = 0.91, respectively). However, the histological grade was significantly higher in p53-positive cases (p = 0.005), but not in cases with TP53 mutation (p = 0.18). Mitotic count (MC) was significantly higher in myxoid (mean 16.5 ± 11.6) and dedifferentiated liposarcoma (mean 15.8 ± 11.09) in comparison to pleomorphic (mean 5.1 ± 2.03) and well-differentiated liposarcoma (mean 2.8 ± 1.67). Furthermore, MC was significantly higher in p53-positive liposarcomas (p = 0.00001) and in those with TP53 mutation (p = 0.04).
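As an illustration of the categorical comparisons reported above, the following minimal Python sketch re-tests the association between p53 immunohistochemical positivity and TP53 mutation using a two-by-two table reconstructed from the counts given in this section (47 sequenced cases, 15 TP53-mutated of which 5 were p53-positive, and 6 p53-positive cases overall). The authors performed Fisher's test in R; the use of scipy here, and the exact contingency table, are assumptions for illustration only and may not match the authors' analysis.

```python
# Illustrative re-analysis of the reported p53 IHC vs. TP53 mutation association.
# Counts are reconstructed from the results above and are an assumption, not the
# authors' original table (their analysis was run in R).
from scipy.stats import fisher_exact

#            IHC-positive  IHC-negative
table = [
    [5, 10],   # TP53-mutated cases (15 total)
    [1, 31],   # TP53 wild-type cases (32 total)
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.4f}")
```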
In this study, we have conducted the analysis of MDM2 and TP53 genes in a case series of canine liposarcoma. The series encompassed 51 cases and comprised all the histologic variants reported in dogs. This investigation revealed some notable clinical features that differ from liposarcoma in humans. It is well-established that liposarcoma is a relatively rare tumor in dogs compared to humans, where it represents one of the most frequent soft tissue sarcomas . Interestingly, in humans myxoid liposarcoma is more commonly found in the lower extremities, particularly affecting the thighs, while intra-abdominal cases are less common . In contrast, in our dataset, half of the myxoid liposarcomas developed in the spleen, with only 2 out of 10 occurring in the soft tissue of the extremities. Furthermore, the majority of splenic liposarcomas (5 out of 6) were myxoid, suggesting that the spleen may be a preferred site for this specific sarcoma in dogs.
TP53 mutations were prevalent in 9 out of 10 (90%) myxoid liposarcomas, while they were less frequent in other subtypes (7.7% of well-differentiated, 37.5% of pleomorphic, and 14.3% of dedifferentiated cases). Among these 9 mutations, eight were predicted to be "damaging" or "probably damaging," while one was categorized as "benign." This differs from findings in human liposarcoma, where TP53 mutations are described in 60% of pleomorphic liposarcomas and in approximately 20% of myxoid liposarcomas . Notably, a study assessing p53 immunohistochemical expression in human myxoid liposarcomas found staining predominantly in highly cellular and round-cell areas, more frequently in high-grade tumors than low-grade ones . By contrast, in this caseload we did not observe preferential p53 staining in round-cell areas, although a statistically significant association between p53 overexpression and grade was identified. Supporting the impact of p53 alterations on tumor biological aggressiveness is the statistically significant association of the MC with both p53 staining and TP53 mutation. However, it should be noted that in our study, 10 out of 15 cases with a TP53 mutation did not exhibit p53 overexpression at the immunohistochemical level. Interestingly, none of these p53-negative cases harbored a nonsense or frameshift mutation. Therefore, this discrepancy may be attributed to reduced antibody sensitivity, although the antibody has been shown to cross-react with the canine species . The same antibody yielded similar results in a study examining p53 alterations in canine malignancies, and its results should therefore be interpreted with caution when used to infer the mutation status of TP53 in canine tumors. Furthermore, in one immunopositive case, a TP53 variant was not identified. This incongruity may also be related to an antibody-related issue or, less likely, to other molecular anomalies leading to p53 stabilization within the nucleus of neoplastic cells. Interestingly, while the expression of p53 and TP53 mutation have been well documented in canine osteosarcoma , studies in canine soft tissue sarcomas are limited and have not reported associations with specific subtypes , . The 15 mutations identified within our cohort were distributed across the TP53 gene: variants were absent from only three exons (2, 7, and 8), while at least one substitution was identified in each of the other exons. This emphasizes the significance of analyzing the entire coding sequence (CDS) of the TP53 gene and highlights that focusing solely on specific exons may result in an underestimation of potential mutations. Immunohistochemical expression of MDM2 was identified in 21 out of 51 cases (41.17%), with the majority found in well-differentiated and dedifferentiated cases, accounting for approximately 70% of these two histotypes. This result aligns partially with the human literature, where immunohistochemistry for MDM2 is often regarded as a surrogate marker for the detection of MDM2 gene amplification, which is diagnostic for these two liposarcoma subtypes . The immunohistochemical expression of MDM2 in canine liposarcoma has been reported, and similar to what is described in people, it was interpreted as suggestive of gene amplification .
Nevertheless, data regarding MDM2 amplification in canine soft tissue sarcomas are scarce, and only a few cases of amplification have been demonstrated by Southern blot in rhabdomyosarcoma and malignant nerve sheath tumors, while the few cases of canine liposarcoma tested did not exhibit MDM2 amplification . In this study, MDM2 amplification status was assessed for the first time in canine liposarcoma using FISH. MDM2 amplification was detected in 10 out of 43 cases (23%), with the majority found in well-differentiated and dedifferentiated cases; however, 6 of these cases were negative for MDM2 immunohistochemistry. Conversely, 13 out of 33 non-amplified cases were positive at the immunohistochemistry level. These findings lead to the conclusion that neither the assessment of MDM2 amplification by FISH nor the immunohistochemical evaluation of MDM2 protein expression can be considered specific for well-differentiated and dedifferentiated liposarcoma in dogs. These discrepancies may be attributed to sensitivity and specificity issues related to the use of probes and antibodies designed for human tissues, which, despite their cross-reactivity, may not perform optimally in canine tissues. Another hypothesis is that canine liposarcomas, despite their morphological similarities to the human counterpart, harbor a set of genetic and proteomic alterations distinct from those of the human counterpart, and that MDM2 protein expression may result from increased transcription or reduced degradation rather than from gene amplification. In summary, our findings indicate that, despite the morphological similarities between canine liposarcoma and its human counterpart, MDM2 amplification is not a defining feature of canine liposarcoma, although it may occur in a minority of cases, and that MDM2 protein expression could potentially contribute to its oncogenic processes. Furthermore, canine myxoid liposarcoma, characterized by TP53 mutations and a predilection for the spleen, likely represents a distinct disease rather than a mere morphological variant. Case selection and histologic evaluation Cases of histologically diagnosed canine liposarcoma were retrospectively retrieved from the archives of multiple institutions (three universities and one private diagnostic laboratory). Cases lacking sufficient paraffin-embedded tissue were excluded from the study. Subsequently, all the slides were collaboratively reviewed by two veterinary pathologists (GA and VP) using a multiheaded microscope. The diagnosis was confirmed and tumors were categorized based on histomorphology, according to the most recent classification system . Mitotic count was determined as the total number of mitotic figures within 10 consecutive, non-overlapping high-power fields (HPF), corresponding to the standard area of 2.37 mm², in the most cellular and proliferative regions of the tumor. The grade was determined using the grading system currently used in veterinary and human medicine . Immunohistochemistry Three-micrometer-thick sections underwent dewaxing and rehydration. To block endogenous peroxidase activity, they were immersed in a 3% H₂O₂ solution in methanol for 30 min. Subsequently, the sections were rinsed in Tris buffer (pH 7.0). For antigen retrieval, the sections were placed in citrate buffer (pH 6.0) and heated in a microwave oven at 750 W for 2 cycles of 5 min each. Afterward, they were allowed to cool at room temperature for 20 min.
Specific antibodies for MDM2 (mouse monoclonal, clone 2A10, dilution 1:100, Abcam, Cambridge, UK) and p53 (mouse monoclonal, clone PAb 240, dilution 1:100, BD Bioscience) were applied and allowed to incubate overnight at 4 °C. Following this, the sections were incubated for 30 min at room temperature with the appropriate biotin-conjugated secondary antibody (dilution 1:200, Dako, Glostrup, Denmark). To enhance the reaction, an avidin–biotin method (ABC kit elite, Vector, Burlingame, CA, USA) was employed, and visualization was achieved using 3,3ʹ-diaminobenzidine (0.04% for 4 min). The sections were counterstained with Harris hematoxylin, rinsed in tap water, dehydrated, and cover-slipped. Positive controls consisted of sections from normal canine testis for MDM2 and sections from canine mammary carcinoma, where p53 expression was known, for p53. Negative controls included slides incubated with a non-specific antibody or the omission of the primary antibody. MDM2 positivity was defined as the presence of at least one positive nucleus per high-power field (40× magnification; 0.237 mm²), while p53 positivity was determined when more than 5% of neoplastic cells exhibited nuclear staining, as previously documented , . Next generation sequencing DNA was extracted from 2 to 4 tissue sections, each measuring 10 µm in thickness, and mounted on slides. The samples were manually scraped using a sterile blade, focusing on the area selected by the pathologist as the most representative on the hematoxylin and eosin-stained slide. DNA amplification was carried out using an amplicon-based laboratory-developed NGS panel, enabling the amplification and sequencing of the entire coding sequence (CDS) of the TP53 and MDM2 genes (reference Canis familiaris 3, 48 amplicons, total size 4.73 kb). The TP53 CDS and MDM2 CDS were the only regions included in the panel. The use of a targeted NGS panel allowed an analytical sensitivity of 10%, with costs not exceeding €300 per sample. Moreover, this panel could easily be translated into clinical practice. The median coverage obtained in this cohort using this panel was 2425×. The obtained results were analyzed using the Variant Caller tool (Thermo Fisher Scientific), Integrative Genomics Viewer (IGV—v.2.12.2) and GenomeBrowse tools ( https://www.goldenhelix.com/products/GenomeBrowse/index.html ). Only mutations with a variant allele frequency (VAF) exceeding 5% were considered for mutational calls. BAM files of the obtained sequences were deposited in the BioProject database (BioProject ID: PRJNA1118484). Tissue microarray Tissue microarrays (TMA) were constructed following a previously published method – . Two cores were sampled for each case, each core measuring 3 mm in diameter. Care was taken to select areas devoid of necrosis and inflammation. Nine double-core TMA blocks were assembled, with each one containing six cases, except for the last TMA which contained three cases, and one orientation core consisting of normal hepatic parenchyma. Fluorescence in situ hybridization To adapt commercially available FISH probes designed for human tissue for use with canine samples, the homology between the sequences of the canine and human MDM2 gene was assessed using the database BLAST (Basic Local Alignment Search Tool—NCBI). The alignment revealed a 92.46% homology between the MDM2 gene sequences of both species, thus allowing the use of human commercial probes. Furthermore, to rule out polysomy, a suitable housekeeping gene on the same chromosome as MDM2 (CFA 10) was sought.
The selection criteria included: (1) no known involvement in the tumorigenesis of human or canine liposarcoma; (2) location on CFA10; and (3) high nucleotide sequence homology between the canine and human species. The GMCL1 gene (germ cell-less, spermatogenesis-associated 1) met these criteria, demonstrating a 95.55% sequence homology between the human and canine species according to BLAST analysis. The tumors underwent fluorescence in situ hybridization (FISH) using a dual-core tissue microarray. The Easy FISH Pretreatment Kit (OACP IE LTD, Cork, Ireland) was utilized. The sections were initially incubated at 75 °C for 5 min in the hybridization plate. Subsequently, the sections underwent dewaxing, dehydration, air-drying, and then incubation with a permeation solution in a water bath at 90 °C for 8 min, followed by incubation in pepsin and HCl solution at 37 °C for 19 min. The sections were then washed in a washing buffer for 5 min, and dehydration was carried out in 2-min steps using 70%, 85%, and 100% ethanol. Finally, the sections were air-dried at room temperature. The MDM2 gene copy number was identified using the MDM2 spectrum Orange FISH probe (catalogue number FP-054, TITAN FISH probe, OACP IE LTD, Cork, Ireland) and the Smart-ISH Solve buffer (OACP IE LTD, Cork, Ireland). The GMCL1 probe, combined with its corresponding buffer (GMCL1 probe set spectrum green, catalogue number GMCL1-10-GR, Empire Genomic), was used as a control to exclude polysomy. The hybridization area was then cover-slipped and sealed with rubber cement . The slides were incubated at 85 °C for 5 min for DNA denaturation and at 42 °C overnight for hybridization with the MDM2 probe; at 83 °C for 3 min for DNA denaturation and at 37 °C overnight for hybridization with the GMCL1 probe. Following this, the slides were washed in NP40 0.5%/2 × SSC (pH 7.0–7.5) at 75 °C for 2 min and in washing buffer for 2 min at room temperature. The slides were then dehydrated and counterstained using DAPI counterstaining solution (OACP IE LTD, Cork, Ireland). The specificity of in-situ hybridization was further evaluated by considering the euploidy of the fibroblasts and lymphocytes adjacent to the neoplastic cells. The number and quality of the signals were evaluated by two independent operators (EDO, LVM) using an Olympus BX61 fluorescence microscope equipped with the relevant filters and objectives, and images were then acquired. The CytoVision® image analysis software was additionally employed to count the number of gene copies per nucleus in the available nuclei with visible signals. FISH assessment was conducted in accordance with the MDM2 amplification patterns observed in human liposarcoma . An MDM2/GMCL1 ratio higher than 2 was considered indicative of MDM2 amplification. Samples were deemed indeterminate for MDM2 if technical issues prevented them from being reported as either positive or negative. Statistical analysis Categorical variables were reported as percentages, while for continuous variables, mean, median, standard deviation (SD), and range were provided. Fisher's test was performed to assess associations between categorical variables. One-way ANOVA was employed to determine associations between mitotic count (MC) and other categorical variables. Results were considered significant at a threshold of p < 0.05. Statistical analysis was conducted using R (version 4.2.0).
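To make the FISH scoring rule above concrete, the following minimal Python sketch classifies a case from per-nucleus MDM2 and GMCL1 signal counts, calling amplification when the MDM2/GMCL1 ratio exceeds 2 and reporting technically inadequate cases as indeterminate. The data structure and the example counts are hypothetical; the authors scored signals with the CytoVision software, not with this code.

```python
# Minimal sketch of the FISH scoring rule described above (assumption: per-nucleus
# signal counts are available as simple lists; this is not the authors' software).
from statistics import mean

def classify_mdm2_fish(mdm2_counts, gmcl1_counts):
    """Return 'amplified', 'not amplified', or 'indeterminate' for one case."""
    if not mdm2_counts or not gmcl1_counts or mean(gmcl1_counts) == 0:
        return "indeterminate"  # technically inadequate hybridization
    ratio = mean(mdm2_counts) / mean(gmcl1_counts)
    return "amplified" if ratio > 2 else "not amplified"

# Hypothetical example: clustered MDM2 signals against a diploid GMCL1 reference.
print(classify_mdm2_fish(mdm2_counts=[8, 10, 12, 9, 11], gmcl1_counts=[2, 2, 2, 2, 2]))
# -> amplified (mean ratio = 5.0)
```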
Delivery of paediatric rheumatology care: a survey of current clinical practice in Southeast Asia and Asia-Pacific regions
c9d523f4-923a-4860-9f18-d04442c01c93
7824936
Pediatrics[mh]
Paediatric rheumatic diseases encompass a spectrum of inflammatory conditions including juvenile idiopathic arthritis (JIA). These conditions remain the leading cause of acquired disability in children ; risk factors for worse outcome include delay to diagnosis, poor access to appropriate therapies, inadequate specialist services, lack of available relevant guidelines and living in countries with worse socioeconomic status . There are severe workforce challenges across the globe but especially so in Asia, with one paediatric rheumatologist for every 26 million children ; this contrasts markedly with the recommendations for Europe and North America of one paediatric rheumatologist per 0.42 and 0.25 million children respectively . Although an increase in the numbers of rheumatologists and rheumatology trainees was reported , there was still limited access to paediatric rheumatologists in Southeast Asia ; thus most children with rheumatic diseases were treated mainly by adult rheumatologists and general paediatricians . Currently, several recommendations for standards of care as well as treatment for children with rheumatic diseases have been developed by international paediatric rheumatology associations consisting of experts mainly from high resource income countries (HRIC) . These guidelines are not always transferable to clinical practice in middle resource income countries (MRIC) and low resource income countries (LRIC), where resources are limited and other health care challenges take priority for health services . The Juvenile Arthritis Management in less resourced countries (JAMLess) recommendations were the first to be aimed at LRIC and highlighted principles to support and develop paediatric rheumatology (PR), including the need for contextually relevant guidance for clinical management, treatments, referrals, monitoring, education and training, advocacy, networks, policy and research. The JAMLess project was originally intended to develop recommendations for less resourced countries and focused on JIA, although many of the questions were generic to service delivery in PR . The aims of this study were to build on the previous work from JAMLess to identify and describe the challenges and potential solutions to improve patient care and raise awareness of paediatric rheumatic diseases in Southeast Asia and Asia-Pacific countries (SE ASIA/ASIAPAC). The anonymised online survey was developed in collaboration with members of the JAMLess group (CS, HF), using essentially the same questionnaire with 27 items, divided into two major themes: PR awareness and clinical care, and PR training programmes in SE ASIA/ASIAPAC. The online survey was piloted and then distributed to clinicians (doctors, nurses, allied health professionals (AHPs)) in the SE ASIA/ASIAPAC regions; recipients were either known to be involved in PR clinical care or were reached through general paediatric networks with potential exposure to children with rheumatic diseases. Participants were sent the link to the survey using existing social media professional groups (WhatsApp™) or by email and were asked to share the survey link. No reminders were sent out. The data were collected electronically via Survey-Monkey™ between March and July 2019. Anonymity and confidentiality were maintained for all participants throughout the survey.
The survey included questions about the participant (job role, country of work, health care setting, duration of practice, postgraduate PR training, percentage of time devoted to PR patients), opinions about barriers to the diagnosis of paediatric rheumatic diseases, access to medications including disease modifying antirheumatic drugs (DMARDs) and biological drugs, provision of the multidisciplinary team (MDT) and any additional challenges or barriers affecting clinical care, with free text comments. There were also questions about PR training and the type of PR teaching offered (e.g. lectures, clinical examination skills and clinical rotation opportunities). Descriptive statistics were used to analyse and present the survey results using collation software provided by Survey-Monkey™. There were 340 participants from a total of 14 countries (Fig. ); the total number of invited participants was unknown so a response rate could not be calculated. The majority of respondents, 261/340 (77.2%), were involved in PR care; most were general paediatricians (52.1%), followed by adult rheumatologists (18.5%), paediatric rheumatologists (15%), and ‘other’ specialists (11.2%; 16 paediatric nephrologists, 5 paediatric allergists, 4 paediatric orthopaedic surgeons, 3 neonatologists, 3 paediatric cardiologists, 1 paediatric haemato-oncologist, 1 paediatric infectious disease specialist, 1 paediatric pulmonologist, and 4 others not identified), 5 general practitioners (1.5%) and others (1.5%; 1 nurse, 2 medical students, and 2 medical officers), as shown in Table . The term ‘specialist’ is conventionally defined as a physician with specialist certification in their respective country. The majority of participants (59.1%) devoted < 25% of their time to caring for children with rheumatic diseases. The duration of clinical practice was broad, ranging from < 5 to > 40 years, with the majority (27.4%) having practised for 5 to 10 years. Most (41.5%) participants worked in government-funded (public sector) practices, 38.8% in academic centres/ teaching hospitals, and 28.5% in private practice, with the remainder in a mix of state-run/government-funded and private practice. The details of practice settings, number of years in practice and time devoted to PR care are shown in Table . Paediatric rheumatology awareness and clinical care delivery Participants reported that paediatric rheumatologists mainly cared for children with rheumatic diseases (44.9%), followed by general paediatricians (38%) and adult rheumatologists (8.1%). The MDT members involved in patient care included general paediatricians (80.8%), paediatric rheumatologists (57.3%), physiotherapists (49.2%), occupational therapists (31.2%), adult rheumatologists (27.8%) and specialist rheumatology nurses (15%) as shown in Fig. . The five main perceived barriers to prompt diagnosis of paediatric rheumatic diseases were: 1) insufficient training about childhood rheumatic diseases amongst paediatricians and other AHPs (64.1%); 2) lack of awareness of paediatric rheumatic diseases amongst AHPs (53.4%); 3) lack of awareness of paediatric rheumatic diseases within the general public population (51.3%); 4) lack of specialised paediatric rheumatologists to refer patients to (49.2%); and 5) lack of paediatric rheumatology MDT (48.3%), as shown in Fig. .
The main barriers to providing a specialised multidisciplinary PR service in the clinical settings were the absence or inadequacy of provision of specialists (68.2%), the absence or inadequacy of provision of AHPs (49.8%) and financial constraints (43.8%). There was very variable access to medications in the countries represented in the survey, as shown in Tables and . Most countries (with the exception of Laos) had access to DMARDs, parenteral corticosteroids and intra-articular steroids (albeit with different corticosteroid preparations available). Access to biological therapies was very variable, with Singapore having access to all biologics and many other countries having access to few or none (e.g. Indonesia, Laos, Vietnam and Nepal). There was generally very low accessibility to biosimilars; availability of tumour necrosis factor biosimilars (Etanercept, Infliximab) and rituximab biosimilar was reported in Australia, India, Malaysia, Pakistan, the Philippines, Singapore and Thailand, whilst other countries reported having access to none. The majority of participants reported financial constraints (62.1%) as the main barrier to accessing biological drugs for patients even when the drugs were available in their countries, followed by non-availability of biological drugs (37.1%) and absence of appropriate specialists to prescribe biological drugs (34.4%). The main challenges affecting clinical care included 1) low socioeconomic status (69.6%), 2) a general delay in accessing the health care system (63.1%), 3) comorbidities such as infection burden (46.1%) and limited access to physical therapy such as physiotherapy and occupational therapy (46.1%), as shown in Fig. . Finally, all participants were asked to share their opinions on how to improve PR clinical care delivery in their setting. Most agreed that there was a need for more paediatric rheumatologists (84.9%), specialist therapists (74.8%) and rheumatology nurses (65.1%) as well as more paediatric MSK training programmes for paediatricians and family medicine physicians (73.4%) to raise awareness and facilitate diagnosis and referral. Paediatric rheumatology training From this survey there appear to be limited opportunities for PR education, including lectures, teaching of MSK examination skills and clinical rotation opportunities at teaching hospitals or universities, at both undergraduate and postgraduate levels. There was very little training available, especially for nurses and AHPs, as well as for adult rheumatology and general practice trainees. In terms of postgraduate training, the main perceived barriers to PR training were lack of a critical mass of trained paediatric rheumatologists to supervise trainees (42.2%), lack of interested applicants to the programme (33.6%) and lack of funding for PR training positions (33.2%) as shown in Fig. . These barriers meant that, for 62.6% of participants, no PR training programme existed in the country where they were in clinical practice.
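As a purely illustrative aid, the sketch below shows how categorical survey responses of the kind summarized above can be tabulated as counts and percentages; the authors used the collation tools built into Survey-Monkey™, and the role counts in the example are hypothetical placeholders, not the study data.

```python
# Hypothetical tabulation of categorical survey responses (not the study data).
from collections import Counter

responses = (
    ["general paediatrician"] * 10
    + ["adult rheumatologist"] * 4
    + ["paediatric rheumatologist"] * 3
    + ["other"] * 3
)

counts = Counter(responses)
total = sum(counts.values())
for role, n in counts.most_common():
    print(f"{role}: {n}/{total} ({100 * n / total:.1f}%)")
```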
This is the first survey describing PR clinical care and training in the SE ASIA/ASIAPAC regions, and it highlights multiple challenges. Our results demonstrated the paucity of trained paediatric rheumatologists and specialist MDTs as the main perceived barrier to improving PR clinical care. Paediatric rheumatologists also have key roles in education and training leadership, advocacy, policy development and research, which are likely to impact on clinical care capacity building. Previous studies from mostly HRIC and MRIC regions demonstrate the global scarcity of paediatric rheumatologists ; the scarcity is most severe in Africa and Asia, unfortunately in the most populous countries and where there are large numbers of children affected . More PR training programmes are needed, and there is a need to encourage and support paediatricians to further their training in PR, although this remains a major challenge given other health care priorities and the lack of paediatricians in many LRIC/MRIC countries . It is therefore imperative to make greater efforts to increase awareness and knowledge about PR amongst general paediatricians, other doctors (orthopaedic surgeons, adult rheumatologists), nurses and AHPs who may be the first health care professionals to encounter children with potential rheumatic diseases . Such health care professionals need targeted education and training relevant to their clinical context to enable them to make an accurate diagnosis, be involved in patient care and refer to specialists where available. PR training is currently developing in some SEA countries, namely Singapore, Malaysia, the Philippines and Thailand. Furthermore, there is a need to increase awareness in the general population to encourage early presentation to health care through campaigns (e.g. World Young Rheumatic Disease (WORD) Day; https://wordday.org ) . The other main perceived barrier to clinical care in SE ASIA/ASIAPAC was the affordability, availability and accessibility of medicines (DMARDs and biologics). The variation in availability of biologics in our survey was notable, even in countries regarded as HRIC (e.g. Australia and New Zealand). Financial constraints, absence of specialists to supervise the use of biological drugs in clinical practice and drug unavailability are all major barriers to the use of these medicines. The prescribing and monitoring of biological therapies for children with rheumatic diseases is recommended to be under the supervision of specialists ; therefore, a paucity of specialists is likely a major barrier to accessing such therapies. Limited access and availability of conventional DMARDs, intraarticular corticosteroids and biological drugs have been reported in other LRIC . The important role of the WHO Essential Medicines List (EML) and the need to include medicines used in PR care has been highlighted ; revision of the EML is a priority for the PR community to address and, if successful, will hopefully improve access to these medicines in many LRIC. Recommendations and guidelines for clinical care are important levers for change but have been mainly developed within HRIC and are not always transferable to LRIC in the context of other health challenges, poverty and burden of infection . The JAMLess recommendations are the first of their kind for LRIC and most respondents to their surveys were from Africa, Asia and South America.
Broadly speaking, our results from SE ASIA/ASIAPAC are similar to those reported in the JAMLess survey, highlighting the need for workforce capacity building, greater access to PR training, specialist care and medicines, and targeted educational programmes to raise awareness . The SE ASIA/ASIAPAC region has wide diversity in terms of socioeconomic status, population density, disease burden and health care systems. The JAMLess recommendations are likely broadly applicable to SE ASIA/ASIAPAC but more work is needed to produce contextually relevant clinical guidelines. There are limitations in our study. First, there is likely a selection bias of participants in the survey. Due to the paucity of paediatric rheumatologists in many of the countries surveyed, there was an unequal distribution of responses across some countries. Our survey study was not a population-based study. We sent the survey link through paediatric networks with the potential to reach clinicians involved in clinical care for children with rheumatic diseases. Most responders were from Malaysia, Myanmar and Thailand. It was challenging to involve all hospitals in each country but we believe that, as a minimum, tertiary care centres in all the participating countries were represented. Additionally, our survey data found that 223 out of 340 participants responded to the question relating to training in PR. The other 117 responders skipped this question. We assumed that the 223 responders had awareness of training in PR in their respective countries and of these, 51 (22.9%) were paediatric rheumatologists. Second, the online survey was available only in English and was distributed through email and WhatsApp™, so we were unable to ascertain the response rate. Third, areas with dedicated PR care are probably more likely to have responded to this survey and countries without dedicated PR care are inevitably less well represented. To the best of our knowledge, this is the first survey of PR clinical care and training in SE ASIA/ASIAPAC and highlights multiple challenges. The Paediatric Global Musculoskeletal Task Force has recently published a ‘Call to Action’ and increasing public and government awareness is important. Facilitating and leveraging change needs support and action from health authorities, higher education institutions and policy makers. We hope that this survey is the initial step for further collaborative working to address many of these challenges and ultimately improve the quality of care for children with rheumatic diseases in the region.
Advancing Peri‐Implantitis Treatment: A Scoping Review of Breakthroughs in Implantoplasty and Er:YAG Laser Therapies
0a546470-6c3b-4af8-92c5-5ee476225b30
11898008
Dentistry[mh]
Introduction Peri‐implantitis represents a critical issue in the field of dental implantology, characterized by inflammatory reactions around osseointegrated dental implants that lead to progressive alveolar bone loss. The increased prevalence of peri‐implantitis, concomitant with the rising use of dental implants, highlights the urgent need for effective management strategies to ensure implant longevity and maintain optimal oral health. Since the 2017 consensus by the American Academy of Periodontology (AAP) set foundational guidelines for the diagnosis and management of peri‐implant diseases (Fragkioudakis et al. ), significant advancements have been made in understanding the etiology, progression, and treatment of peri‐implantitis (Müller et al. ; Monje et al. ). This evolution in knowledge underscores the necessity for comprehensive reviews that assimilate the latest evidence and refine clinical protocols. The treatment landscape for peri‐implantitis has broadened considerably, incorporating both traditional approaches and innovative therapeutic modalities. Traditionally, mechanical debridement has served as the cornerstone of peri‐implantitis management, often supplemented with antimicrobial agents to address the biofilm‐mediated etiology of the disease (Shiba et al. ). However, these conventional methods frequently exhibit limitations—especially in advanced cases—thus prompting the exploration of adjunctive therapies (Wang et al. ; Tu et al. ; Norton ). Among the emerging interventions, laser technologies and surgical procedures such as implantoplasty have garnered considerable attention. In particular, the Erbium‐doped Yttrium Aluminum Garnet (Er:YAG) laser has been recognized for its capacity to selectively ablate diseased tissues and biofilms from the implant surface without causing significant collateral damage to the surrounding tissues (Hakki et al. ; Clem and Gunsolley ; Wang et al. ). Simultaneously, implantoplasty—a surgical technique involving the mechanical smoothing of exposed implant surfaces to reduce surface roughness and subsequent bacterial retention—has shown promise in mitigating the progression of peri‐implantitis (Caccianiga et al. ). Despite the encouraging outcomes reported with both Er:YAG laser therapy and implantoplasty, variability in clinical efficacy persists. The success of these treatment modalities appears to be influenced by multiple factors, including the severity of the peri‐implant defect, the patient's systemic health, and the specific technical execution of the procedures (Yamamoto et al. ; Sharonit; Lin et al. ). For example, while several studies have demonstrated significant improvements in clinical parameters such as probing depth (PD), bleeding on probing (BOP), and marginal bone levels (MBL) following these interventions, the heterogeneity in study designs and outcome measurements necessitates a nuanced analysis to identify optimal treatment protocols (Lin et al. ; Scarano et al. ; Świder et al. ). This review aims to critically evaluate current and reputable literature from the past 7 years concerning the efficacy of implantoplasty and Er:YAG laser therapy in the management of peri‐implantitis. Employing a scoping review methodology, the present manuscript focuses on primary outcomes—including reductions in PD, improvements in BOP, and changes in MBL—as well as secondary outcomes such as enhancements in soft tissue health and patient‐reported outcomes. 
By synthesizing recent evidence, we seek to provide comprehensive, evidence‐based recommendations that will assist clinicians in developing personalized treatment plans based on patient‐specific factors, such as systemic health and lifestyle habits. Ultimately, this review aspires to illuminate the path forward in the evolving field of peri‐implantitis management, serving as a valuable resource for both clinicians and researchers dedicated to optimizing long‐term treatment outcomes. Materials and Methods 2.1 Protocol and Registration This scoping review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta‐Analyses Extension for Scoping Reviews (PRISMA‐ScR) checklist. The review protocol was originally registered with the International Prospective Register of Systematic Reviews (PROSPERO; Registration No. CRD42024532117; Registration Date: 04/03/2024). Note that the protocol was initially designed as a systematic review and was subsequently adapted to a scoping review format. 2.2 Eligibility Criteria The inclusion criteria for this review were defined using the PICOS framework: Population (P): Adults diagnosed with peri‐implantitis, characterized by increased probing depth and bleeding on probing. Intervention (I): Implantoplasty procedures, which involve the mechanical smoothing of implant surfaces to reduce biofilm retention. Comparison (C): Erbium‐doped Yttrium Aluminum Garnet (Er:YAG) laser treatment for implant surface decontamination. Outcomes (O): ○ Primary Outcomes: Reduction in probing depth (PD), improvement in bleeding on probing (BOP), and changes in marginal bone levels (MBL). ○ Secondary Outcomes: Enhancements in soft tissue health, patient‐reported outcomes (e.g., pain, discomfort), and microbial load reduction. Study Design (S): Randomized controlled trials (RCTs), cohort studies, and case‐control studies published in English from January 2018 to the present. Studies were required to have a minimum follow‐up period of 6 months and include at least 10 patients (or 10 implants) per treatment group. 2.3 Search Strategy A comprehensive literature search was performed across multiple electronic databases to identify relevant studies. The following databases were searched: PubMed, EMBASE, the Cochrane Library, and Web of Science. For each database, the search strategy combined Medical Subject Headings (MeSH) and free‐text keywords. An example of the search strategy for PubMed is as follows: (“peri‐implantitis”[MeSH] OR “dental implants”[MeSH]) AND (“implantoplasty” OR “Er:YAG laser”) Similar search terms and Boolean operators (AND, OR) were adapted for EMBASE, the Cochrane Library, and Web of Science. The complete search strategy, including all keywords and Boolean factors used for each database, is provided in the Supporting Information. 2.4 Data Extraction and Quality Assessment Two reviewers independently extracted data from the selected studies using a standardized data extraction form. The form captured the following information: Study design and characteristics. Population details (including the number of participants/implants and follow-up times). Detailed descriptions of the interventions (implantoplasty and Er:YAG laser therapy). Outcome measures and main findings. Quality assessment of the included studies was performed using the Cochrane risk‐of‐bias tool (RoB2) for randomized controlled trials and the ROBINS‐I tool for non‐randomized studies.
Discrepancies between the reviewers were resolved by discussion or consultation with a third reviewer. 2.5 Data Synthesis and Analysis Given the anticipated heterogeneity in interventions, study designs, and outcome measures, a narrative synthesis was conducted. The analysis was organized by intervention type, and comparative assessments were performed when sufficient data were available. Although a meta‐analysis was considered, the variability in study methodologies precluded quantitative synthesis. Furthermore, the certainty of the evidence was evaluated using a GRADE (Grading of Recommendations, Assessment, Development and Evaluations) approach, with findings summarized in a dedicated GRADE table (see Table ). Note on multiplicity: While multiple primary outcomes were identified, care was taken during analysis to mitigate the risk of Type I error. 2.5.1 Critical Appraisal Quality assessment of the included studies was conducted using the Cochrane risk‐of‐bias tool (RoB2) for randomized controlled trials and the ROBINS‐I tool for non‐randomized studies. Overall, most studies demonstrated robust randomization, clear outcome reporting, and adequate allocation concealment. However, several studies provided limited details on blinding procedures and the handling of missing data, which resulted in a low to moderate risk of bias rating. These observations indicate that, while the overall methodological quality is acceptable, some limitations in study design should be considered when interpreting the results. 2.5.2 Data Charting Data extraction was performed independently by two reviewers using a standardized data extraction form that captured key variables including study design, sample size (number of participants and implants), follow‐up duration, and detailed descriptions of the interventions (implantoplasty or Er:YAG laser therapy). The form also recorded both primary outcomes (e.g., changes in probing depth, bleeding on probing, and marginal bone levels) and secondary outcomes (e.g., improvements in soft tissue health and patient‐reported measures). Any assumptions or necessary conversions—such as categorizing follow‐up durations or standardizing outcome measures—were explicitly documented within the main text to ensure consistency and transparency in the synthesis of the available evidence.
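For readers who wish to run the PubMed arm of the search strategy in Section 2.3 programmatically, the sketch below submits the Boolean query from the text to NCBI E-utilities. The query string is taken from the Methods; the use of Biopython's Entrez client, the placeholder e-mail address, and the retmax value are assumptions for illustration and were not part of the published protocol.

```python
# Illustrative sketch: running the PubMed query from Section 2.3 via NCBI E-utilities.
# Assumes Biopython is installed; the January 2018 onward limit from Section 2.2
# would still need to be applied as an additional date filter during screening.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # placeholder; NCBI requires a contact address

query = (
    '("peri-implantitis"[MeSH] OR "dental implants"[MeSH]) '
    'AND ("implantoplasty" OR "Er:YAG laser")'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records retrieved for screening")
print(record["IdList"][:10])  # first ten PubMed IDs
```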
Results 3.1 Selection of Studies The initial search identified 649 potential articles, culminating in the selection of 24 pivotal studies after rigorous screening. These studies, exclusively Randomized Controlled Trials (RCTs), provide a comprehensive examination of implantoplasty and Er:YAG laser treatments for peri-implantitis. The diversity in geographical locations and settings of these studies enriches the pool of evidence, offering a broad perspective on treatment efficacy across different patient populations and clinical practices. Studies directly linked to laser usage are listed in Table , while studies directly referencing implantoplasty are presented in Table . 3.2 Detailed Analysis of Treatment Efficacy 3.2.1 Probing Depth Reduction In the realm of implantoplasty, Shiba et al. showcased an average reduction in probing depth of approximately 2.3 mm at a 12-month follow-up. This finding was echoed by Martins et al. , who observed a substantial decrease, with an average reduction of 2.1 mm over a 24-month period, underscoring the sustained efficacy of implantoplasty in reducing probing depths. Similarly, Er:YAG laser treatments demonstrated noteworthy effectiveness in probing depth reduction. Yamamoto et al.
reported an average reduction of 3.1 mm in probing depth posttreatment, while Fragkioudakis et al. documented a comparable outcome with a 3.0 mm reduction. These findings emphasize the potent impact of the Er:YAG laser in improving clinical measures associated with peri-implantitis. 3.2.2 Bleeding on Probing Improvement The analysis revealed that implantoplasty significantly reduced instances of bleeding on probing. Shiba et al. achieved a complete cessation of BOP in all treated sites, marking a profound improvement in peri-implant health. This effect was mirrored by Martins et al. , who observed a notable reduction in BOP in 95% of the cases over a 24-month follow-up period, illustrating the durable impact of the treatment. Similarly, Er:YAG laser therapy was efficacious in addressing BOP; Yamamoto et al. documented a substantial decrease in BOP prevalence from 100% of sites pretreatment to 30% posttreatment, highlighting the laser's significant anti-inflammatory effects. Fragkioudakis et al. further corroborated these results by reporting improvements in BOP in 90% of treated cases, underscoring the laser's capability to effectively mitigate inflammation and bleeding. 3.2.3 Marginal Bone Level Changes An analysis of marginal bone level changes revealed nuanced effects for both treatment modalities. Martins et al. highlighted that implantoplasty had a modest impact on marginal bone levels, with a slight reduction averaging less than 0.5 mm over a 24-month period. This suggests that while implantoplasty primarily targets soft tissue health and bacterial load reduction, its effect on bone integrity is minimal, indicating the need for further exploration. In contrast, Er:YAG laser therapy showcased promising potential for bone health; Fragkioudakis et al. reported not only stabilization of marginal bone levels but also instances of bone gain. 3.3 Additional Measurements Additional outcomes reported in the studies included improvements in soft tissue health, enhanced patient-reported outcomes such as reductions in pain and discomfort, and significant microbial load reduction around the implant site. For instance, several studies investigating Er:YAG laser therapy noted improvements in soft tissue attachment and a reduction in patient-reported discomfort, suggesting an enhancement in overall peri-implant health beyond the traditional measurements of probing depth and bleeding. 3.4 Risk of Bias Assessment A thorough risk of bias assessment revealed a predominantly low to moderate risk across the included studies. This consistent methodological quality—characterized by robust randomization processes and clear, objective outcome reporting—reinforces the strength and reliability of the synthesized evidence. 3.5 Heterogeneity and Subgroup Analysis A high degree of heterogeneity was anticipated and observed due to variability in treatment protocols, patient populations, and outcome measurements. Subgroup analyses were conducted to explore potential sources of variability, assessing factors such as baseline disease severity, treatment duration, and follow-up period. These analyses provided deeper insights into the conditions under which each treatment modality may be most beneficial, although the observed heterogeneity underscores the need for standardized methodologies in future research.
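As a quick arithmetic illustration of the probing-depth figures reported above (an unweighted summary only, not a formal meta-analysis, since study-level weights are not pooled here), the per-modality means can be computed as follows:

```python
# Probing-depth reductions (mm) reported in the cited studies
implantoplasty_pd = {"Shiba et al. (12 mo)": 2.3, "Martins et al. (24 mo)": 2.1}
er_yag_pd = {"Yamamoto et al.": 3.1, "Fragkioudakis et al.": 3.0}

def unweighted_mean(values: dict) -> float:
    return sum(values.values()) / len(values)

print(f"Implantoplasty, unweighted mean PD reduction: {unweighted_mean(implantoplasty_pd):.2f} mm")  # 2.20 mm
print(f"Er:YAG laser, unweighted mean PD reduction: {unweighted_mean(er_yag_pd):.2f} mm")            # 3.05 mm
```

These simple averages (about 2.2 mm for implantoplasty and about 3.05 mm for the Er:YAG laser) mirror the qualitative pattern discussed next.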
Discussion This review provides a comprehensive evaluation of implantoplasty and Er:YAG laser treatments for peri-implantitis, demonstrating that both modalities yield significant improvements in clinical parameters such as probing depth reduction and bleeding on probing.
Our results indicate that implantoplasty, by mechanically smoothing implant surfaces, significantly reduces probing depths and inflammatory signs, as evidenced by studies such as Shiba et al. (average reduction of approximately 2.3 mm at 12 months) and Martins et al. (average reduction of 2.1 mm over 24 months). However, its modest effect on marginal bone levels (mean changes of less than 0.5 mm) suggests that while effective in improving soft tissue health and reducing bacterial load, implantoplasty may have a limited impact on preserving or enhancing bone integrity. In contrast, Er:YAG laser therapy demonstrated robust decontamination effects, with Yamamoto et al. and Fragkioudakis et al. reporting probing depth reductions of approximately 3.1 mm and 3.0 mm, respectively. Moreover, the laser's potential to promote bone stability—or even bone gain—positions it as a promising minimally invasive approach for both soft and hard tissue management. These differential effects underscore the need for a tailored treatment approach based on the specific clinical scenario. The significance of these findings is further reinforced when compared with recent narrative and systematic reviews. Herrera et al. emphasize that an evidence-based, multidisciplinary approach is essential for the prevention and treatment of peri-implant diseases, advocating for the integration of adjunctive therapies with conventional mechanical debridement to achieve long-term stability. Similarly, Schwarz et al. reported that while nonsurgical treatments may improve soft tissue parameters, combining surgical interventions—including both resective and regenerative techniques—is necessary for sustained improvements, especially in cases of advanced bone loss. Roccuzzo et al. highlighted that a history of periodontitis is a critical prognostic factor that negatively influences treatment outcomes, corroborating our observation that patient-specific factors must be considered when selecting a treatment modality; patients with a history of periodontitis may require more aggressive or combined therapies to achieve optimal results. Reviews by Atieh et al. and C. Y. Lin et al. further support the role of laser technologies by demonstrating that lasers offer a more precise means of decontaminating the implant surface while minimizing collateral tissue damage—a conclusion that is consistent with our findings of significant probing depth reductions following Er:YAG laser therapy. Moreover, Cheng et al. have shown that regenerative surgical protocols can achieve bone gain, although long-term data remain limited and outcomes are heterogeneous. Finally, Schwarz et al. reiterated that overcoming the multifactorial challenges of peri-implantitis treatment often requires combining mechanical debridement with adjunctive chemical, laser-based, or regenerative therapies. Collectively, these comparisons emphasize that while our findings confirm the efficacy of both implantoplasty and Er:YAG laser therapy, the optimal management of peri-implantitis will likely involve a tailored, multifaceted approach. Treatment must be individualized by taking into account the severity of the disease, the patient's history (especially regarding previous periodontitis), and the specific advantages and limitations of each modality. Furthermore, our review—like those of Herrera et al., Schwarz et al., Roccuzzo et al., Atieh et al., C. Y. Lin et al., Cheng et al., and Schwarz et al.
—consistently underscores the importance of long-term maintenance and risk factor modification. Regular supportive care is critical to sustain the benefits of the initial intervention and to prevent disease recurrence. 4.1 Limitations This scoping review has several limitations. First, the included studies exhibited considerable heterogeneity in terms of study design, population characteristics, intervention protocols, and outcome measures, which limited the ability to perform a quantitative synthesis. Second, some studies lacked comprehensive reporting on critical variables, which may affect the generalizability of the findings. Third, while a broad range of databases was searched, there remains the possibility of publication bias. Finally, the review did not directly compare Er:YAG laser therapy with implantoplasty but rather summarized evidence on each intervention separately. These factors should be taken into account when interpreting the results and formulating clinical recommendations. Conclusion Both implantoplasty and Er:YAG laser treatments emerge as effective modalities in the management of peri-implantitis, with each technique offering unique advantages. Implantoplasty effectively reduces surface roughness and bacterial load, leading to significant improvements in probing depths and bleeding on probing. In contrast, Er:YAG laser therapy not only provides precise decontamination but also shows promising potential for stabilizing or even improving marginal bone levels. These complementary effects underscore the need for a tailored, patient-specific treatment approach that considers disease severity, individual risk factors (such as a history of periodontitis), and the particular strengths of each modality. Our findings align with recent narrative and systematic reviews by Herrera et al., Schwarz et al., Roccuzzo et al., Atieh et al., T. Lin et al., and Schwarz et al., all of which advocate for a multifactorial, individualized treatment strategy. Looking ahead, future research should focus on direct comparative studies and the exploration of potential synergistic effects when combining these treatments. Equally important is the implementation of regular supportive maintenance protocols and risk factor modification strategies to ensure long-term clinical stability and successful treatment outcomes. Sean Mojaver contributed to the conception, design, data collection, analysis of the literature, and drafting of the manuscript. Joseph Fiorellini provided critical revisions, supervised the review process, and assisted with editing. Hector Sarmiento assisted with the interpretation of findings, conducted quality assessments, and contributed to the final review and editing of the manuscript.
All authors have read and approved the final version of the manuscript and agree to be accountable for all aspects of the work, ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. The authors declare no conflicts of interest.
Brainstem clinical and neurophysiological involvement in COVID-19
7648b2ae-1c4a-49ca-ac69-a8d4cf85b697
7969346
Physiology[mh]
Management of tunneled-cuffed catheters in hemodialysis patients with hypotension and recurrent central venous thrombosis: A single-center retrospective cohort study
eaee4e4b-8bfc-4bcc-840f-c7b0d7cf25cf
11863222
Surgical Procedures, Operative[mh]
Chronic, persistent hypotension is a common complication in hemodialysis (HD) patients, occurring in approximately 5% of patients who have been undergoing HD for multiple years. This condition is considered a relative contraindication for arteriovenous fistula (AVF) creation, as it impairs AVF maturation and blood perfusion. Hypotension has also been described as a risk predictor of thrombosis in AVF and adversely affects AVF survival. Therefore, catheters are recommended for patients with hypotension who attempt to create or maintain a permanent access. However, long-term use of tunneled-cuffed catheters (TCCs) is often associated with thrombosis and fibrin sheath formation. Catheter-related thrombosis can result in inadequate and irregular blood flow rates, leading to catheter malfunction, ineffective HD, concomitant infection, and pulmonary embolism (PE). Moreover, recurrent thrombosis in the central veins further restricts alternative vascular access, which is essential for the survival of HD patients. Therefore, preserving the TCCs in these patients when catheter-related thrombosis occurs is critical. Meanwhile, HD patients with hypotension usually have a relatively short life expectancy. Hypotension restricts HD patients' rehabilitation and limits the amount of ultrafiltration, which may further reduce blood pressure early in the dialysis run. Additionally, contributing factors such as heart disease and impaired left ventricular reserve further increase the risk of mortality in these patients. In this context, prolonging the HD vintage and improving patients' quality of life remain crucial challenges in the vascular access field. We therefore investigated strategies for managing TCCs in HD patients with hypotension, especially when recurrent thrombi occur in the central veins. Methods Study population The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013) and approved by the Ethics Committee on Biomedical Research at the West China Hospital of Sichuan University (No. 201885). Individual consent for this retrospective analysis was waived. All patient details have been de-identified to ensure that they cannot be identified in any way. The study conforms to the Strengthening the Reporting of Observational Studies in Epidemiology guidelines. This retrospective study collected data from HD patients with chronic persistent hypotension at West China Hospital between October 2010 and April 2018. Hypotension was defined as a systolic blood pressure below 100 mmHg, recorded on three consecutive nondialysis days, and sustained for at least one month. Inclusion criteria: a) age ≥ 18 years; b) diagnosis of end-stage renal disease (ESRD) with maintenance HD and persistent hypotension; and c) use of TCCs as the vascular access for HD. The following exclusion criteria were applied: a) severe organic heart disease; b) serious bleeding conditions or tendencies; c) disorders of consciousness, including altered levels of consciousness such as lethargy, confusion, delirium, stupor, or coma; d) active malignant tumors; and e) incomplete data or loss to follow-up. Data were collected on each patient to identify the cause of ESRD, blood pressure measurements, duration of HD, and duration of catheter use. Procedures All procedures were performed by interventional nephrologists under local anesthesia. Preoperative chest computed tomography angiography (CTA) or echocardiography was used to assess the thrombus and the TCC tips.
Intraoperative digital subtraction angiography (DSA) was conducted to confirm the position of the TCCs and to adjust the catheter tip position in patients with catheter dysfunction. The procedure adopted in the current study was as follows: Initially, the catheter tip was routinely positioned in the superior vena cava (SVC). In cases of SVC thrombosis, stenosis, or obstruction, the catheter tips were adjusted to the right atrium (RA) or the SVC and RA junction. Catheter tips were then returned to the SVC for patients who experienced complete dissolution of SVC thrombi but developed RA thrombosis or had insufficient blood flow. In patients with only partial dissolution of SVC thrombi, the TCC tips were repositioned in the inferior vena cava (IVC) after recanalization of the SVC. For patients with IVC thrombosis or obstruction caused by repeated irritation from the TCC tips, interventional surgery was performed to relocate the TCC tips to the lower section of the obstructed area. As mentioned above, when thrombi in the RA were completely resolved, the catheter tips could be repositioned to the RA or the SVC and RA junction. Immediately after TCC insertion and after the completion of each HD session, both ports were locked with unfractionated heparin solution. Lifelong antiplatelet therapy was prescribed to all patients to ensure catheter patency for ongoing HD. Patients continued regular HD sessions and were monitored through follow-up echocardiography or CTA. Outcomes and statistical analysis The primary outcomes were the incidence of fatal PE and catheter complication–related deaths; the secondary outcome was catheter patency. Statistical analyses were performed using Stata (version 17.0). The Shapiro–Wilk test assessed the normality of continuous variables. Data with a normal distribution were expressed as mean ± standard deviation, while non-normally distributed data were presented as median and interquartile range. Patency rates were assessed using the Kaplan–Meier method.
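The statistical analyses were run in Stata 17; as a hedged illustration only, an equivalent workflow in Python using scipy and the lifelines package, with entirely hypothetical example data, might look like the following sketch.

```python
import numpy as np
from scipy.stats import shapiro
from lifelines import KaplanMeierFitter

# Hypothetical example data: systolic blood pressure (mmHg) on nondialysis days
sbp = np.array([88, 92, 85, 90, 79, 95, 86, 91, 84, 89])
w_stat, p_value = shapiro(sbp)  # Shapiro-Wilk normality test
print(f"Shapiro-Wilk W={w_stat:.3f}, p={p_value:.3f}")
# Normal distribution -> report mean +/- SD; non-normal -> report median (IQR)

# Hypothetical catheter patency data: months until loss of primary patency
durations = np.array([2, 4, 5, 7, 9, 12, 12, 14, 18, 24])  # months of follow-up
events = np.array([1, 1, 0, 1, 1, 1, 0, 1, 0, 1])          # 1 = patency lost, 0 = censored

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events, label="primary patency")
print(kmf.survival_function_at_times([3, 6, 12]))  # patency estimates at 3, 6, and 12 months
```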
Results Patient characteristics A total of 21 patients were included in this study, with 8 (38.0%) being male. The mean age was 64.52 ± 12.91 years, and the median duration of HD was 84 months. The median duration of hypotension was 2 months, and the mean systolic blood pressure was 88.38 ± 7.88 mmHg ( and Supplementary Table 1 ). All patients underwent right central venous puncture, with the catheter tips initially positioned in the SVC. Among them, 19 patients (90.5%) had occlusion of the right brachiocephalic vein and the proximal segment of the right internal jugular vein after prolonged use of TCCs. Additionally, all patients exhibited SVC-related complications, including thrombosis, fibrin sheath formation, severe stenosis, or occlusion caused by recurrent thrombi ( and Supplementary Table 2 ). One patient also demonstrated an occlusion at the SVC and RA junction, as shown in . Treatment Tunneled-cuffed catheters were replaced over a guidewire under DSA by an experienced nephroradiologist in all patients following the detection of a diseased SVC. In patients with right internal jugular vein entry site catheters, TCCs were replaced in situ. The right brachiocephalic vein was chosen as the puncture site to allow for convenient revascularization of the SVC for patients with SVC occlusion or obstruction and without a diseased right brachiocephalic vein.
When the right brachiocephalic vein and SVC were occluded, the SVC stump was used as the puncture site, and recanalization of the SVC by balloon angioplasty (Cordis Corporation, Milpitas, Calif) was performed. The catheter tip adjustment procedure followed the steps described in the "Methods" section. The location of the new catheter tips was confirmed by DSA. Follow-up The median follow-up time was 6 years (60–96 months). Catheter outflow and blood oxygen saturation were recorded. None of the patients were found to have reduced blood oxygen saturation. Patients with reduced catheter outflow were readmitted to the hospital. The adjustment of the catheter tip for each patient is listed in and Supplementary Table 2 . Nineteen (90.5%) patients suffered from thrombus/fibrin sheath formation in the SVC or insufficient blood flow, and their catheters were replaced in the RA or at the SVC and RA junction. Among them, two patients (patients #2 and #7) had insufficient blood flow, and one patient (patient #13) had the end of the catheter adhere to the RA wall. Catheters were exchanged in situ and remained in the RA in these three patients. In one patient (patient #5), whose catheter tip was in the RA with insufficient blood flow and whose SVC thrombus had dissolved, the catheter was repositioned back to the SVC. Five patients had SVC or RA lesions; recanalization of the SVC or RA was conducted, and the catheter tips were finally adjusted to the IVC. Among them, one patient (patient #1) failed to maintain sufficient blood flow in the IVC, necessitating the return of the catheter tip to the RA. However, thrombi formed in the RA, so the catheter was finally adjusted back to the IVC after dissolution of the IVC thrombus. When the TCC tips were adjusted to the RA following pathological changes in the SVC, all patients experienced reduced outflow from the arterial port of the catheter. This was attributed to their shorter anatomical stature, resulting in relatively smaller RA volumes. The arterial and venous lines of the catheter were reversed during the dialysis session. All patients were on regular dialysis without thrombolysis or thrombectomy. Nineteen deaths were observed during the follow-up period. The minimum survival period after the procedure was 60 months. Neither fatal PE nor catheter complication–related deaths occurred during the follow-up period. The catheter primary patency rates at 3, 6, and 12 months were 90.5%, 66.7%, and 38.1%, respectively. The secondary patency rates were 100.0%, 80.9%, and 57.1% at 3, 6, and 12 months, respectively.
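To make the tip-repositioning strategy used in this cohort easier to follow, the sketch below encodes the decision flow described in the Procedures section as a simple Python function; it is a schematic simplification of clinical judgment, and only the position labels are taken from the text.

```python
# Candidate tip positions, in the order they were preferred in this cohort
POSITIONS = ["SVC", "SVC-RA junction", "RA", "IVC"]

def next_tip_position(current: str, lesion_at_current: bool,
                      prior_site_thrombus_resolved: bool) -> str:
    """Schematic rule: if the site in use becomes diseased, move the tip back to the
    previous site when its thrombus has resolved, otherwise advance to the next site."""
    idx = POSITIONS.index(current)
    if not lesion_at_current:
        return current                    # keep the tip where it is
    if prior_site_thrombus_resolved and idx > 0:
        return POSITIONS[idx - 1]         # e.g., RA thrombosis after the SVC clears -> back to SVC
    if idx + 1 < len(POSITIONS):
        return POSITIONS[idx + 1]         # e.g., SVC thrombosis -> SVC-RA junction or RA
    return current                        # already at the last candidate position (IVC)

# Example: SVC thrombosis with no resolved prior site prompts a move toward the RA
print(next_tip_position("SVC", lesion_at_current=True, prior_site_thrombus_resolved=False))
```

As the Discussion below emphasizes, this order was not applied rigidly in practice.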
Discussion How to manage TCCs in HD patients with hypotension who develop repeated central venous thrombosis is a critical question. The current article therefore provides a proof of concept for maintaining HD by adjusting the TCC tips as follows: the tips are initially placed in the SVC and are then moved to the SVC and RA junction, the RA, or the IVC when thrombosis, stenosis, or obstruction develops in the vein segment currently in use. Once the thrombi have completely dissolved, the catheter tips can be returned to their former location.
Recanalization of the occluded vein by balloon angioplasty was performed where required to allow smoother insertion of the new TCC. Vascular access was thus preserved in these patients, who had very limited alternative access sites, and all of them maintained satisfactory blood flow. No fatal PE or catheter-related deaths occurred, and catheter patency rates were well maintained during the follow-up period. This article is perhaps among the first to report the management of TCCs and the clinical outcomes of HD patients with hypotension. To date, the optimal location for the long-term TCC tip remains controversial. The placement of catheter tips in the RA can achieve a higher blood flow. In contrast, placing catheter tips in the SVC is associated with a lower incidence of RA thrombosis and is therefore recommended to avoid potential cardiac-related complications. Regardless of catheter tip location, thrombosis remains the most common complication of long-term TCC use and results in access loss in 30–40% of patients. The potential pathogenesis of catheter-related thrombosis involves mechanical irritation of the vessel wall caused by the catheter tip or the high flow rates facilitated by the catheter. This irritation can result in endothelial damage, triggering coagulation, platelet aggregation, and subsequent thrombus formation. Adjusting catheter tip positions during catheter exchanges may help reduce irritation to the vessel wall, thereby minimizing thrombus expansion and preventing further complications. Drawing from our previous experience, catheter-related RA thrombi can often be resolved following catheter exchange and tip adjustment, combined with oral anticoagulation or antiplatelet therapy. In the present study, we prioritized preserving vascular access when thrombi recurred in the central veins. Thus, the position of the catheter tip was adjusted successively rather than removing the catheter. Additionally, while the catheter patency rate was acceptable, it was slightly lower than previously reported, possibly due to the inclusion of patients with persistent hypotension. Regarding catheter tip adjustment, we do not recommend following a strict "SVC-RA-IVC" sequence. Rather, we suggest that the catheter tip can be moved back to the SVC when an RA lesion occurs and the SVC thrombus has dissolved; likewise, it can be moved back to the RA when an IVC lesion occurs and the RA thrombus has dissolved. Similarly, we do not focus on how to choose the puncture site. Wherever the puncture site is selected, thrombosis is expected with prolonged TCC use, and thrombosis and vascular occlusion may occur in the SVC, the RA, the SVC extending into the RA, and even the IVC. Thus, in our strategy, the right internal jugular vein was routinely chosen as the puncture site, and the tip of the TCC was preferentially placed in the SVC, the RA, or the SVC and RA junction. Both the puncture entry site and the location of the TCC tips were adjusted according to the condition of the central veins. The study has some limitations. First, the limited sample size, retrospective design, and absence of a control group highlight the need for larger-scale prospective randomized controlled trials to further validate the efficacy of the proposed strategy. Second, the interventional procedure involves some unpredictability, and discrepancies between preoperative ultrasound/CTA findings and the actual severity of vasculopathy can arise. Therefore, the selection of the puncture site and TCC tip location is based on the specific situation.
The experience of the nephroradiologist and timely decision-making are critical. This strategy, based on our 8 years of experience, still requires optimization in certain circumstances. Conclusion To preserve the limited vascular access options and address repeated central venous thrombosis in HD patients with hypotension, it may be effective to place the TCC tips initially in the SVC and then at the SVC and RA junction, the RA, or the IVC in the event of thrombosis, stenosis, or obstruction. When thrombi are completely dissolved, the catheter tips can be returned to their former location. Additionally, patients with central venous thrombus should be treated with antiplatelet therapy. These findings require validation through future prospective studies. Supplemental material for this article is available online.
Benchmarking single-cell cross-omics imputation methods for surface protein expression
a434d8ba-4f49-44e7-ab6f-0f63ed8329d0
11881419
Biochemistry[mh]
Recent advances in single-cell multimodal omics (scMulti-omics) sequencing have revolutionized our ability to simultaneously profile multiple molecular layers within individual cells, offering comprehensive insights into cellular functions and heterogeneity . Protocols such as cellular indexing of transcriptomes and epitopes (CITE-seq) and RNA expression and protein sequencing assay (REAP-seq) enable the concurrent quantification of transcriptomes and surface proteomes within the same cell, effectively bridging the gap between gene expression and protein functionality . These integrated approaches have the potential to reveal cellular diversity that single-cell RNA sequencing (scRNA-seq) alone might overlook . While CITE-seq and REAP-seq represent groundbreaking technologies with immense potential, their prohibitive costs and intricate technical requirements, compared to scRNA-seq, present obstacles to the widespread generation of large-scale public datasets essential for unraveling the complexities of diverse tissues . Given that genes are the blueprints for protein synthesis and that a correlation exists between transcriptomes and proteomes , a promising solution is to leverage large reference datasets to learn the relationship between RNA and proteins. This relationship can then be used to predict protein abundances in cells measured only by scRNA-seq. Several recent studies have explored this possibility, leading to the development of various surface protein data imputation methods. These imputation methods utilize datasets generated by CITE-seq or REAP-seq, which include both surface protein and gene expression data, as training data to develop machine learning models. These models are then used to predict surface protein expression in cells measured by scRNA-seq alone (test data). The imputation methods can be broadly categorized into three types: traditional machine learning-based methods and two types of deep learning-based methods. The first type of methods, including Seurat v3 (CCA) , Seurat v3 (PCA) , Seurat v4 (CCA) , and Seurat v4 (PCA) , first identify mutual nearest neighbors between training and test datasets in a shared low-dimensional space and then transfer surface protein data from the training dataset to the test based on the identified mutual nearest neighbors. The other two types are both based on deep learning, differing in their network structures. The first type, including cTP-net , sciPENN , scMOG , and scMoGNN , employs deep neural networks to directly learn a mapping between transcriptomic and proteomic data from the training dataset, which is then used to make imputations for the test dataset. The second type, including TotalVI , Babel , moETM , and scMM , is based on an encoder-decoder framework. These methods first use an encoder to embed both transcriptomic and proteomic data into a joint latent representation, and then use a decoder to make predictions for the proteomic data. Although these methods have demonstrated good performance in various scenarios, predicting protein expression from gene expression data remains challenging due to post-transcriptional and post-translational modifications, as well as differences in protein stability and localization . Therefore, a comprehensive evaluation of these methods in practical applications is essential. In this study, we present an extensive benchmark of twelve state-of-the-art imputation methods using eleven CITE-seq and REAP-seq datasets across six distinct benchmark scenarios. 
We employ various accuracy measures to quantitatively evaluate the predicted values at both the protein and cell levels. Additionally, we assess the methods’ sensitivity to training data size, robustness across experiments, and efficiency in terms of time and memory. We also assess their popularity based on the number of stars on their official GitHub repositories and evaluate their user-friendliness in terms of installation, code, and tutorial. Our findings indicate that Seurat-based methods, particularly Seurat v4 (PCA) and Seurat v3 (PCA), demonstrate superior accuracy and robustness across diverse experiments, showing relative insensitivity to training data size. They are also highly efficient in terms of memory usage, widely popular with numerous stars on their GitHub repository, and provide high-quality installation guides, codes, and tutorials. However, they exhibit longer running times compared to some deep learning-based methods, which highlights scalability concerns and underscores the necessity for future enhancements to manage larger datasets effectively. Additionally, we offer a decision-tree-style guidance scheme that intuitively presents the recommended methods for specific scenarios based on benchmark evaluation results, facilitating more efficient selection of the most appropriate methods. Overview of the benchmark scheme The overall pipeline of this benchmark study is illustrated in Fig. . In each experiment, we use one CITE-seq or REAP-seq dataset containing paired transcriptomic and proteomic data as the training data. For the test data, we mask the proteomic data from another CITE-seq or REAP-seq dataset, retaining only the transcriptomic data to simulate scRNA-seq data, and then use various imputation methods to predict the corresponding proteomic data (Fig. a). To comprehensively evaluate the performance of these imputation methods, our benchmark includes twelve state-of-the-art methods: four Seurat-based methods (Seurat v3 (CCA), Seurat v3 (PCA), Seurat v4 (CCA), and Seurat v4 (PCA)), cTP-net, sciPENN, scMOG, scMoGNN, TotalVI, Babel, moETM, and scMM. These methods are categorized based on their imputation strategies (Fig. b): imputing by mutual nearest neighbors, imputing by learning a mapping between transcriptomic and proteomic data using deep learning, and imputing by learning a joint latent representation using an encoder-decoder framework. To test the generalizability and robustness of these imputation methods, we use eleven datasets and conduct experiments under six distinct benchmark scenarios (Fig. b and Additional file 1: Tables S1, S2): (1) Random holdout: A dataset is randomly divided into training and test sets to address the case without technical or biological differences; (2) Different training data sizes: Evaluating performance with varying training data sizes to understand how training data size influences each method; (3) Different samples: Considering the scenario where the training and test datasets come from different samples; (4) Different tissues: Testing each method’s generalizability when predicting protein expression for cells from tissues different from those used in the training set; (5) Different clinical states: Assessing each method’s ability to transfer between datasets with biological variations; (6) Different protocols: Investigating performance when training and test datasets are derived from different sequencing protocols. After generating imputation values using different methods (Fig. 
c), we design a comprehensive framework to evaluate their performance (Fig. d). First, we evaluate the accuracy of methods using Pearson correlation coefficient (PCC) and root mean square error (RMSE). To provide an overall performance metric, we also introduce an average rank score (ARS) that combines the rank score values of methods based on PCC and RMSE. A higher ARS value indicates better accuracy performance across all metrics in the experiment. Second, we assess how the methods’ accuracy performance changes with varying training data sizes by running the methods on training sets of different sample sizes. This analysis helps to understand how the amount of training data influences the methods’ accuracy performance. Third, we evaluate the robustness of methods across experiments by introducing a robustness composite score (RCS). This metric considers both the mean and standard deviation of the ARS values across different experiments. We primarily evaluate experiments demonstrating technical and biological differences that closely resemble those conducted in real-world applications. These experiments stem from scenarios involving different samples, tissues, clinical states, and protocols. A high RCS value indicates that a method not only performs well on average but also maintains consistent performance across all experiments with technical and biological differences. Accurate protein abundances across cells are crucial for tasks such as differential expression analysis and omics feature correlation analysis, while accurate protein abundances in individual cells are essential for tasks like cell clustering analysis and cell trajectory inference. Therefore, we assess the methods at both the protein and cell levels for the above evaluations to accommodate the varying requirements of different downstream tasks. Finally, we compare the methods in terms of usability metrics, including popularity (measured by the number of stars on their official GitHub repositories), user-friendliness (measured by the quality of installation procedures, codes, and tutorials), running time, and memory usage. Scenario 1: evaluating accuracy performance over random holdout To evaluate the performance of different imputation methods, we begin with a straightforward scenario where the training and test datasets are randomly divided from the same dataset. We utilize three widely referenced datasets: CITE-PBMC-Stoeckius , CITE-CBMC-Stoeckius , and CITE-BMMC-Stuart , which have been extensively used in previous studies assessing surface protein expression imputation methods . For each dataset, we randomly split the cells into two groups: a training dataset comprising half of the cells and a test dataset with the remaining half. The training dataset is used to train the models, and the test dataset is used to evaluate their performance. To account for variability in the dataset split, we repeat the experiment five times and present the results of each repetition using boxplots. Finally, in this scenario, we conduct a total of 15 experiments, consisting of three datasets, with five repeated experiments for each dataset. Figure a shows the median PCC of each method across proteins or cells in each replicate experiment, while the corresponding results evaluated using RMSE are presented in Additional file 2: Fig. S1. Most methods exhibit stable performance across different replicates, except for moETM, which appears sensitive to the split between training and test datasets. 
Notably, moETM demonstrates superior and stable performance with the CITE-CBMC-Stoeckius dataset but exhibits considerable performance fluctuations with the other two datasets, suggesting that its performance may heavily depend on the underlying dataset. The performance of each method also varies across datasets and evaluation metrics, with no clear overall winner. To summarize these results, we calculate the average of the 15 ARS values (from three datasets, five repetitions) at both the protein and cell levels. We find that cTP-net outperforms other methods at the protein level while achieving moderate performance at the cell level (Fig. b, c). Unlike cTP-net, which shows a preference for the protein level, Seurat v4 (PCA), Seurat v4 (CCA), and Seurat v3 (PCA) demonstrate competitive performance at both the protein and cell levels (Fig. b, c). Scenario 2: evaluating accuracy performance over different training data sizes We investigate the impact of training data size variations on the accuracy performance of imputation methods. Using the CITE-PBMC-Stoeckius, CITE-CBMC-Stoeckius, and CITE-BMMC-Stuart datasets, we first randomly split each dataset into training and test sets, following scenario 1. Subsequently, we down-sample the training dataset by removing cells at intervals of 10% from 0 to 90%, while keeping the test dataset constant. To address variability, we conduct five replicate experiments for each dataset. In total, we conduct 150 experiments in this scenario, using three datasets and performing five repeated experiments for each dataset across ten different down-sampling rates. Under each down-sampling rate, we first calculate the median PCC and RMSE across proteins or cells for each experiment, and then take the median of these values across five replicate experiments to obtain a robust performance measure, whose trends across different down-sampling rates are illustrated in Fig. a and Additional file 2: Fig. S2. As expected, imputation performance generally decreases as the training dataset size is reduced. Notably, methods such as Seurat v3 (CCA), Seurat v4 (CCA), and Seurat v4 (PCA) show relative insensitivity to training data size variations, maintaining robust performance. In contrast, deep learning-based methods like scMM, scMOG, and moETM, which perform poorly initially, are more sensitive to reductions in training data size. TotalVI also exhibits some sensitivity at the protein level. This sensitivity may be due to the larger training datasets required by deep learning models for optimal performance. To comprehensively rank the twelve imputation methods, we calculate the average of the 150 ARS values (from three datasets, five repetitions, and ten down-sampling rates) at both the protein and cell levels. cTP-net, Seurat v4 (PCA), and Seurat v4 (CCA) demonstrate the best performance across various down-sampling rates at the protein level (Fig. b). At the cell level, Seurat v4 (PCA), Seurat v4 (CCA), and Seurat v3 (PCA) outperform other methods (Fig. c). Scenario 3: evaluating accuracy performance over different samples In this scenario, we evaluate the performance of imputation methods when the training and test datasets originate from different samples, reflecting common real-world conditions. We use three datasets: CITE-PBMC-Li , CITE-SLN111-Gayoso , and CITE-SLN208-Gayoso . The CITE-PBMC-Li dataset includes data from eight volunteers measured before and after HIV vaccination. 
To avoid confounding sample-level batch differences with biological variation, we use only pre-vaccination data. The volunteers are randomly assigned to two non-overlapping groups: group 1, consisting of four volunteers, and group 2, comprising the remaining four. We conduct two complementary experiments, alternating between using one group as the training set and the other as the test set. To account for randomness, we repeat the group assignments five times and conduct the experiments for each random division. The CITE-SLN111-Gayoso and CITE-SLN208-Gayoso datasets contain data from the spleen and lymph node tissues of two mice. For each dataset, we perform two complementary experiments, alternating between using one mouse as the training set and the other as the test set. In total, 14 experiments are conducted in this scenario. For the CITE-PBMC-Li dataset, two complementary experiments are performed with five repetitions, while for the CITE-SLN111-Gayoso and CITE-SLN208-Gayoso datasets, two complementary experiments are conducted for each dataset. A comparison of the evaluation results from experiments involving different datasets reveals substantial differences. In experiments involving the CITE-PBMC-Li dataset, moETM consistently achieves the best performance in protein-level evaluation metrics (Fig. a and Additional file 2: Fig. S3a). However, no single method consistently outperforms others at the cell level, with TotalVI, Seurat v3 (PCA), and scMoGNN each demonstrating their respective strengths (Fig. a and Additional file 2: Fig. S3a). Boxplots in Additional file 2: Fig. S4 are based on the median evaluation metric value across proteins or cells of each repetition, showing the performance of each method across different random divisions. We observe that most methods exhibit relatively stable performance, with the aforementioned methods consistently maintaining their respective advantages. In the CITE-SLN111-Gayoso dataset, TotalVI and Seurat-based methods excel at the protein and cell levels, respectively (Fig. b and Additional file 2: Fig. S3b). In the CITE-SLN208-Gayoso dataset, TotalVI leads at both the protein level and for PCC at the cell level (Fig. c and Additional file 2: Fig. S3c). To summarize, we evaluate the methods’ performance at the protein and cell levels by averaging the six ARS values (from three datasets, two complementary experiments per dataset). To account for the potential impact of varying numbers of experiments across datasets on the overall results, the ARS values for the CITE-PBMC-Li: Group1 → Group2 and CITE-PBMC-Li: Group2 → Group1 experiments are calculated using the median evaluation metric values across five repetitions. moETM, TotalVI, and scMoGNN show superior performance at the protein level (Fig. d). Seurat-based methods consistently demonstrate superior performance when focusing on the accuracy of protein abundances at the cell level (Fig. e). Scenario 4: evaluating accuracy performance over different tissues We assess the performance of the methods when the training and test datasets are derived from different tissues. We utilize three datasets: CITE-BMMC-Stuart (bone marrow mononuclear cells), CITE-CBMC-Stoeckius (cord blood mononuclear cells), and CITE-PBMC-Stoeckius (peripheral blood mononuclear cells), each representing cells from distinct but related blood sources . The datasets are paired with one another in both directions, resulting in six experiments where each dataset is alternately used as the training and test dataset.
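For concreteness, these six experiments correspond to all ordered (training, test) pairs of the three datasets. A minimal sketch of the pairing, using the dataset names only and leaving data loading aside, is:

```python
from itertools import permutations

# The three tissue-specific datasets used in this scenario; the names stand in
# for however the corresponding expression matrices are loaded in practice.
datasets = ["CITE-BMMC-Stuart", "CITE-CBMC-Stoeckius", "CITE-PBMC-Stoeckius"]

# All ordered (training, test) pairs: 3 x 2 = 6 cross-tissue experiments.
for train_name, test_name in permutations(datasets, 2):
    print(f"train on {train_name:<22} -> impute proteins for {test_name}")
```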
Summarizing the results of these six experiments (Fig. a), we observe variability in benchmark results across different assessment metrics. Specifically, for metrics at the protein level, Seurat-based methods generally lead in performance except in the BMMC → PBMC and CBMC → PBMC experiments, where scMoGNN and cTP-net outperform other methods, respectively. For PCC at the cell level, sciPENN shows superior performance, except in the CBMC → PBMC and PBMC → CBMC experiments, where TotalVI and Seurat v4 (PCA) perform best, respectively. Seurat-based methods consistently demonstrate superior performance in RMSE at the cell level across all experiments. An interesting observation is that protein-level metrics are more sensitive to the direction of transfer between datasets. The leading methods achieve higher PCC values and lower RMSE values in the BMMC → CBMC and PBMC → CBMC experiments compared to their respective complementary experiments. When the results across all six experiments are summarized using average ARS values, Seurat v4 (PCA), Seurat v3 (PCA), and Seurat v3 (CCA) exhibit superior performance for protein-level metrics (Fig. b). Seurat v3 (PCA), Seurat v3 (CCA), and Seurat v4 (CCA) lead in performance for cell-level metrics (Fig. c). Scenario 5: evaluating accuracy performance over different clinical states In this scenario, we assess the ability of the methods to transfer between datasets with biological variations. We use three datasets: CITE-PBMC-Haniffa , CITE-PBMC-Sanger , and CITE-PBMC-Li. The first two datasets are related to COVID-19, while the last one pertains to human immunodeficiency virus (HIV). The CITE-PBMC-Haniffa dataset includes data from volunteers with varying illness severity, healthy volunteers, and patients with severe non-COVID-19 respiratory illnesses. We design two experiments: one using data from healthy volunteers to infer data from critical patients, and another using data from non-COVID-19 acute respiratory disease patients to infer data from asymptomatic individuals. For benchmarking, we randomly select five samples each from the healthy volunteer and critical patient groups due to their large data size. To minimize the influence of randomness on the benchmark results, we perform five repetitions of the experiment. The CITE-PBMC-Sanger dataset categorizes patients by the severity of treatment required. We first use data from asymptomatic patients not requiring oxygen therapy as the training dataset and data from symptomatic patients not requiring oxygen therapy as the test dataset. Next, we use data from symptomatic patients not requiring oxygen therapy as the training dataset and data from symptomatic patients requiring extracorporeal membrane oxygenation (ECMO) therapy as the test dataset. The CITE-PBMC-Li dataset includes data from eight volunteers before and after HIV vaccination. We design two experiments: one using pre-vaccination data (Day0) as the training set and data from the third day post-vaccination (Day3) as the test set, and the other using Day0 data as the training set and data from the seventh day post-vaccination (Day7) as the test set. In the CITE-PBMC-Li: Day0 → Day3 experiment, we randomly select data from four volunteers before vaccination as the training set, and use data from the remaining four volunteers collected on the third day post-vaccination as the test set.
The same experimental setup is also applied in the CITE-PBMC-Li: Day0 → Day7 experiment. To reduce the impact of randomness in training and test set partitioning on the benchmark results, we perform five repetitions for each experiment. In total, 18 experiments are conducted. Among these, the experiments involving CITE-PBMC-Haniffa: Healthy → Critical, CITE-PBMC-Li: Day0 → Day3, and CITE-PBMC-Li: Day0 → Day7 are each repeated five times to account for sampling randomness. Benchmark results for protein-level metrics indicate that moETM consistently achieves superior performance across all experiments (Fig. a–c and Additional file 2: Fig. S5). Notably, in the four COVID-19 experiments, moETM significantly surpasses other methods, while in the remaining two experiments, scMoGNN demonstrates performance comparable to moETM. This trend remains consistent across repeated experiments (Additional file 2: Fig. S6). In this scenario, characterized by marked technical differences and biological variation, cTP-net’s performance decreases substantially compared to scenarios 1 and 2 (Figs. , , and Additional file 2: Figs. S1, S2, S5), highlighting its limitations in handling batch differences without correction. For cell-level metrics, the results vary across experiments (Fig. a–c and Additional file 2: Fig. S5). No single method achieves the best performance in all experiments, and the rankings of methods vary considerably. Finally, we employ the ARS to assess the overall performance of these methods in this scenario. To mitigate the impact of varying numbers of experiments on the evaluation results, for experiments with repetitions, the ARS is calculated based on the median evaluation metric values across five repetitions. Overall, the top three methods by ARS at the protein level are moETM, Seurat v3 (PCA), and scMoGNN (Fig. d). At the cell level, the top three methods are Seurat v3 (PCA), Seurat v4 (PCA), and scMoGNN (Fig. e). Scenario 6: evaluating accuracy performance over different protocols We delve deeper into the performance of imputation methods in the scenario where training and test datasets originate from different sequencing protocols. Four datasets are utilized: CITE-PBMC10K-10X , CITE-PBMC5K-10X , CITE-PBMC-Stoeckius, and REAP-PBMC-Peterson . The primary distinction between the first two datasets lies in their sequencing depths . For this pair of datasets, two experiments are conducted, alternating between using one dataset as the training dataset and the other as the test dataset. The latter two datasets differ in sequencing technologies, and we likewise perform two experiments with them. Thus, a total of four experiments are conducted in this scenario. Upon summarizing the results of these experiments (Fig. a), we observe that Seurat-based methods consistently exhibit superior generalization capabilities across all experiments. Their performance remains among the best regardless of the evaluation metric employed. Seurat v4 generally outperforms Seurat v3, except in the CITE → REAP experiment. Notably, comparing the outcomes of experiments with reciprocal training and test datasets reveals an intriguing finding: using the REAP-PBMC-Peterson dataset as the training dataset yields superior imputation performance compared to using the CITE-PBMC-Stoeckius dataset.
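Throughout these scenario-level summaries, per-experiment metrics are aggregated with the average rank score (ARS) introduced earlier. The exact definition is given formally elsewhere in the paper; purely as an illustration of the idea, with made-up metric values and a simple mean of per-metric ranks standing in for the actual formula, the computation can be sketched as:

```python
import pandas as pd

# Illustrative per-experiment summary metrics for a few methods (made-up numbers).
summary = pd.DataFrame(
    {"median_pcc": [0.85, 0.82, 0.78], "median_rmse": [0.40, 0.45, 0.50]},
    index=["Seurat v4 (PCA)", "TotalVI", "scMM"],
)

# Rank each metric so that a larger rank score always means better performance:
# higher PCC is better, lower RMSE is better.
pcc_score = summary["median_pcc"].rank(ascending=True)     # best PCC gets the top rank
rmse_score = summary["median_rmse"].rank(ascending=False)  # best (lowest) RMSE gets the top rank

# One plausible reading of the average rank score: the mean of the per-metric ranks.
ars = (pcc_score + rmse_score) / 2
print(ars.sort_values(ascending=False))
```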
Based on the average ARS values across all four experiments, Seurat v4 (PCA), Seurat v4 (CCA), and Seurat v3 (CCA) emerge as the top performers for protein-level metrics (Fig. b). Conversely, for cell-level metrics, the leading methods are Seurat v4 (PCA), Seurat v4 (CCA), and Seurat v3 (PCA) (Fig. c). Evaluating usability in terms of time and memory We evaluate the usability of different imputation methods in terms of time and memory. Using a computational platform with a 16,896 KB L3 Cache, 48 CPU cores, and an NVIDIA Tesla V100 GPU, we conduct experiments on the CITE-BMMC-Stuart dataset. Following the settings from scenario 2, we use various training data rates (from 10 to 100% in 10% intervals), where the training data rate is equivalent to 1 minus the down-sampling rate in scenario 2. To reduce biases caused by fluctuations in the experimental environment and enhance the reliability and robustness of the evaluation results, we perform five repeated experiments for each training data rate. From the running time trends shown in Fig. a, which is based on the medians of the repeated experiments, and the specific recorded values presented in Additional file 1: Table S3, several patterns emerge. cTP-net requires significantly more time than the other methods, often exceeding 11 h, mainly due to its data denoising process with SAVER-X . Other methods can be grouped into three categories based on their running times. TotalVI and scMOG have longer but relatively stable running times across different training data rates. In contrast, sciPENN, Babel, and moETM are the most time-efficient methods, completing tasks in under a minute. While their running times slightly increase with higher training data rates, they remain significantly faster than the other methods. The remaining methods show a clear increase in running time as the training data rate rises. Notably, Seurat v4 is slower than Seurat v3 at lower training data rates, likely due to its more complex preprocessing. However, as the training data rate increases, Seurat v3 becomes slower than Seurat v4, indicating greater sensitivity to training dataset size. Moreover, CCA is slower than PCA within Seurat. Additional file 1: Table S3 presents the detailed running times for each method across repeated experiments. Although variability is observed in some repetitions, the fluctuations remain consistently within a reasonable range. Regarding memory usage, as shown in Fig. b, which is based on the medians of the repeated experiments, and Additional file 1: Table S4, the methods can be divided into three groups. At higher training data rates, both scMOG and scMoGNN exceed 20 GB in memory usage, significantly surpassing the other methods, with scMoGNN showing a more pronounced increase compared to scMOG. The excessive memory usage of scMOG and scMoGNN may be attributed to the pretraining mechanism and the incorporation of graph structures, respectively. cTP-net uses between 10 and 20 GB, with usage increasing as the training data rate rises, likely due to data denoising. The remaining methods use less than 10 GB, with minor variations. Within Seurat, memory usage does not depend on the dimensionality reduction method but is slightly higher for Seurat v4 than Seurat v3. Additional file 1: Table S4 records the detailed memory usage for each method across repeated experiments. The results show that memory usage exhibits less fluctuation than running time across repetitions. 
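As a generic illustration of how such measurements can be collected (the benchmark's exact profiling tooling is not specified in this passage), the sketch below wraps a single imputation run; impute_fn and its array arguments are placeholders, and tracemalloc captures only Python-level allocations, so process-level or GPU memory would require additional tools such as psutil or nvidia-smi:

```python
import time
import tracemalloc

def profile_run(impute_fn, rna_train, adt_train, rna_test):
    """Record wall-clock time (seconds) and peak Python heap usage (GB) of one run.

    `impute_fn` is a placeholder for any benchmarked method wrapped as a Python
    callable; `rna_train`, `adt_train`, and `rna_test` are placeholder matrices.
    """
    tracemalloc.start()
    start = time.perf_counter()
    predicted_adt = impute_fn(rna_train, adt_train, rna_test)
    elapsed = time.perf_counter() - start
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return predicted_adt, elapsed, peak_bytes / 1024**3
```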
Overall summary of benchmark results We summarize the performance of these methods across four primary dimensions: accuracy, sensitivity to training data size, robustness across experiments, and usability. The accuracy for each scenario is defined as the mean of the average rank score (ARS) values across the experiments within that scenario, while the overall accuracy is the mean ARS value across all scenarios. Sensitivity to training data size is assessed using two metrics: the rank score of accuracy-performance increments, which quantifies how strongly each method’s accuracy varies with training data size, and the average-increment composite score (AICS), which considers both the average performance of methods and their variability to training data size to reflect the effectiveness of models. This evaluation is conducted in scenario 2. Robustness across experiments is evaluated by the robustness composite score (RCS), which is calculated based on the ARS values from all experiments with technical and biological differences, indicating the stability and competitiveness of accuracy across these real-world-like experiments. These experiments come from the scenarios with different samples, tissues, clinical states, and protocols. Accuracy, sensitivity to training data size, and robustness across experiments are all examined at both the protein and cell levels. Usability encompasses time, memory, and quality. For time and memory, we calculate both the mean and the increment relative to training data size using the results recorded in Fig. . These metrics provide insights into the efficiency of the methods and their variability to training data size, respectively. Quality is measured through popularity and user-friendliness. The popularity is represented by the number of stars on each method’s official GitHub repository (last updated on 15 December 2024). We score the user-friendliness of methods based on three aspects: installation, code, and tutorial. Each method starts with 5 points in each aspect, with points deducted for any identified issues. The user-friendliness score for each method is then calculated by summing the points across all three aspects. The overall benchmark results are summarized in Fig. and the accuracy evaluation results for specific scenarios are shown in Additional file 2: Fig. S7. Based on the results of our study, several findings emerge. In terms of accuracy, we observe that at the protein level, benchmark results vary across scenarios. Notably, cTP-net tends to show superior performance primarily in scenarios without batch differences (Additional file 2: Fig. S7b, left), likely because it transfers networks learned in the training dataset to the test dataset without performing batch correction. Conversely, moETM and scMoGNN perform well in scenarios with batch differences (Additional file 2: Fig. S7b, left), highlighting the strengths of joint representations and graph neural networks in handling such complexities. Seurat-based methods consistently occupy the top three positions in all scenarios except that with different samples (Additional file 2: Fig. S7b, left), with Seurat v4 (PCA) leading overall (Fig. b, left). At the cell level, Seurat-based methods consistently show superior performance (Fig. b, right), utilizing mutual nearest neighbor cells to achieve accurate protein abundances in individual cells. Among these methods, PCA-based dimensionality reduction yields better results than CCA (Fig.
b, right). Notably, in scenarios with biological variation embedded in batch differences, such as different clinical states, scMoGNN performs comparably to Seurat-based methods (Additional file 2: Fig. S7b, right), underscoring the advantages of higher-order topological relationships in complex batch differences. In terms of sensitivity to training data size, we find that at the protein level, cTP-net, Seurat v4 (PCA), and Seurat v4 (CCA) are the most effective (Fig. c, left). In Seurat-based methods, PCA-based dimensionality reduction exhibits greater variability to training data size compared to CCA (Fig. c, left). At the cell level, the most effective methods are Seurat v4 (PCA), Seurat v4 (CCA), and Seurat v3 (PCA) (Fig. c, right). Among these, Seurat v4 (PCA) consistently demonstrates excellent performance across various training dataset sizes (Fig. c, right). In contrast, the performance of the remaining two methods exhibits relatively greater variability to training data size (Fig. c, right). Further analysis of the AICS evaluation results under varying ω_ai settings indicates that the results remain relatively stable when ω_ai exceeds 0.5, especially for the top-performing methods (Additional file 1: Tables S5, S6). The aforementioned evaluation results can assist users in considering the training data size when selecting methods. In experiments with technical and biological differences, at the protein level, methods such as Seurat v4 (PCA) and Seurat v3 (PCA), which achieve excellent accuracy, also tend to be relatively robust (Fig. d, left and Additional file 2: Fig. S7b, left). However, exceptions exist, such as moETM, which exhibits high accuracy only in the scenarios of different samples and clinical states, resulting in less robust performance across all scenarios (Fig. d, left and Additional file 2: Fig. S7b, left). At the cell level, Seurat v3 (PCA), Seurat v3 (CCA), and Seurat v4 (PCA) outperform other methods and also consistently demonstrate superior accuracy across most scenarios (Fig. d, right and Additional file 2: Fig. S7b, right). Notably, while Seurat v4 (CCA) slightly outperforms Seurat v3 (CCA) in accuracy evaluations, it is less competitive in robustness assessments (Fig. d, right and Additional file 2: Fig. S7b, right). Further analysis of the RCS evaluation results under different ω_ms settings reveals that when ω_ms is greater than 0.5, the RCS evaluation results remain relatively stable, particularly for the top-performing methods (Additional file 1: Tables S7, S8). The robustness assessment results in experiments closely resembling real-world scenarios can serve as a supplementary guide for users when selecting methods for specific scenarios. Regarding usability, we first evaluate efficiency based on running time and memory usage. We find that cTP-net and scMoGNN, despite high accuracy, are less efficient in terms of time and memory (Fig. e, left and middle and Additional file 1: Tables S3, S4). Conversely, among the methods with relatively excellent accuracy performance, moETM is the most time-efficient and exhibits the least variability to training data size (Fig. e, left and Additional file 1: Table S3). Seurat-based methods are the most memory-efficient and show the least variability to training data size (Fig. e, middle and Additional file 1: Table S4).
However, they have longer running times compared to some deep learning-based methods, and the running time increases markedly as the training data size grows. Regarding popularity, Seurat-based methods dominate, likely due to Seurat’s multifunctional suite for single-cell data analyses (Fig. e, right and Additional file 1: Table S9). In terms of user-friendliness, the Seurat-based methods also lead, followed by TotalVI and sciPENN (Fig. e, right, Additional file 1: Table S10). These three methods consistently achieve high scores across the aspects of installation, code, and tutorial, whereas other methods exhibit more issues in one or more of these aspects. Upon comprehensive evaluation, Seurat-based methods, particularly Seurat v4 (PCA) and Seurat v3 (PCA), emerge as the most favorable options, demonstrating superior accuracy and robustness across diverse experiments and showing relative insensitivity to training data size. Their ability to handle various sources of single-cell data effectively, while maintaining memory efficiency and user-friendly features, makes them top choices for the surface protein expression imputation task. However, they exhibit longer running times compared to some deep learning-based methods, highlighting scalability concerns and underscoring the necessity for future enhancements to effectively manage larger datasets. Decision-tree-style guidance scheme for method selection Furthermore, we provide users with scenario-specific method recommendations in the form of a decision tree (Fig. ). This concise and intuitive scheme is designed to help users identify the most suitable methods for each specific scenario. Each branch of the decision tree represents a distinct experimental scenario evaluated in our study. For each scenario, we recommend three methods based on the ARS evaluation results at both the protein and cell levels (as described in Additional file 2: Fig. S7), catering to diverse downstream experimental needs. As shown in our overall evaluation results (Fig. ), Seurat v4 (PCA) and Seurat v3 (PCA) are the recommended methods in most scenarios. However, exceptions exist in certain cases, highlighting that some methods perform better in specific scenarios and thus expanding the range of choices available to users. For example, when prioritizing protein-level accuracy, cTP-net is the most recommended method in the scenario without batch differences. In the scenario with different samples, moETM, TotalVI, and scMoGNN are recommended, while in the scenario with different clinical states, moETM and scMoGNN are similarly preferred. When prioritizing cell-level accuracy, we also recommend scMoGNN in the scenario involving different clinical states. In addition to the scenario-based method selection guidance scheme, we also provide a summary table in Additional file 1: Table S11, outlining the imputation strategy, strengths, weaknesses, and recommended application scenarios of each method, to help users better understand the differences between the methods.
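To make this guidance easier to consume programmatically, the recommendations called out above can be condensed into a small lookup table. The sketch below is a simplified rendering assembled from the scenario-wise ARS results reported in the text, not a transcription of the full decision tree in the figure:

```python
# A simplified, dictionary-based rendering of the scenario-specific guidance.
# Scenarios not listed as exceptions fall back to the Seurat-based defaults.
DEFAULT = ["Seurat v4 (PCA)", "Seurat v3 (PCA)"]
EXCEPTIONS = {
    ("protein", "random holdout"): ["cTP-net"] + DEFAULT,
    ("protein", "different samples"): ["moETM", "TotalVI", "scMoGNN"],
    ("protein", "different clinical states"): ["moETM", "Seurat v3 (PCA)", "scMoGNN"],
    ("cell", "different clinical states"): ["Seurat v3 (PCA)", "Seurat v4 (PCA)", "scMoGNN"],
}

def recommend(level, scenario):
    """Return recommended methods for an evaluation level ('protein'/'cell') and a scenario."""
    return EXCEPTIONS.get((level, scenario), DEFAULT)

print(recommend("protein", "different samples"))  # ['moETM', 'TotalVI', 'scMoGNN']
print(recommend("cell", "different tissues"))     # falls back to the Seurat defaults
```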
Furthermore, we provide users with scenario-specific method recommendations in the form of a decision tree (Fig. ). This concise and intuitive scheme is designed to help users identify the most suitable methods for each specific scenario. Each branch of the decision tree represents a distinct experimental scenario evaluated in our study. For each scenario, we recommend three methods based on the ARS evaluation results at both the protein and cell levels (as described in Additional file 2: Fig. S7), catering to diverse downstream experimental needs. As shown in our overall evaluation results (Fig. ), Seurat v4 (PCA) and Seurat v3 (PCA) are the recommended methods in most scenarios. However, exceptions exist in certain cases, highlighting that some methods perform better in specific scenarios and thus expanding the range of choices available to users. For example, when prioritizing protein-level accuracy, cTP-net is the most recommended method in the scenario without batch differences. In the scenario with different samples, moETM, TotalVI, and scMoGNN are recommended, while in the scenario with different clinical states, moETM and scMoGNN are similarly preferred. When prioritizing cell-level accuracy, we also recommend scMoGNN in the scenario involving different clinical states. In addition to the scenario-based method selection scheme, we provide a summary table in Additional file 1: Table S11, outlining the imputation strategy, strengths, weaknesses, and recommended application scenarios of each method, to help users better understand the differences between the methods. The emergence of CITE-seq and REAP-seq technologies has revolutionized our understanding of cellular heterogeneity by enabling simultaneous profiling of gene expression and surface protein expression at the single-cell level. However, widespread adoption of these technologies is hampered by technical challenges and high costs, leading to the limited availability of publicly accessible datasets for studying complex tissues. Leveraging machine learning methods to impute surface proteomic data from transcriptomic data presents a promising solution to this challenge, enabling the acquisition of paired multimodal datasets for comprehensive analysis. Despite the development of various computational methods for surface protein data imputation, a comprehensive evaluation of their performance remains elusive. In this benchmark study, we bridge this gap by assessing twelve state-of-the-art imputation methods with respect to accuracy, sensitivity to training data size, robustness across experiments, and usability. Our findings unveil several key insights. Seurat-based methods, particularly Seurat v4 (PCA) and Seurat v3 (PCA), consistently exhibit competitive performance at both the protein and cell levels (Fig. b and Additional file 2: Fig. S7b). In contrast, while other methods may excel at one level, their performance tends to falter at the other, with varying outcomes across different scenarios (Additional file 2: Fig. S7b). Sensitivity analysis reveals that Seurat-based methods are relatively insensitive to variations in training data size (Figs. , c), whereas deep learning-based methods, such as scMM, scMOG, TotalVI, and moETM, display higher sensitivity to reductions in training data size (Figs. , c). Additionally, Seurat-based methods, particularly Seurat v4 (PCA) and Seurat v3 (PCA), demonstrate robustness across different experiments with technical and biological differences (Fig. d). Furthermore, efficiency analysis highlights moETM and Seurat-based methods as the most time-efficient and memory-efficient options, respectively, and as the least variable with respect to training data size, among the methods with relatively excellent accuracy (Figs. , e and Additional file 1: Tables S3, S4), making them appealing choices for practical applications. Overall, our findings underscore the exceptional performance of Seurat-based methods, particularly Seurat v4 (PCA) and Seurat v3 (PCA), across multiple metrics, coupled with their popularity and user-friendly features. While the results presented in this study are based on datasets with available surface protein ground truth for performance evaluation, we also conduct exploratory analyses on scenarios lacking ground truth. In the absence of ground truth, evaluating the validity of the imputed protein expression presents a challenge. To address this, we examine whether the clustering structure of cells is preserved between the transcriptomic and imputed proteomic data. In extensive experiments conducted without ground truth, we evaluate the consistency between the clustering derived from imputed proteomic data and that derived from transcriptomic data using the Adjusted Rand Index (ARI) (see Additional file 2: Supplementary note 1 for details).
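The following is a minimal sketch of this clustering-concordance check, assuming scanpy (with the leidenalg backend) and scikit-learn; the z-scoring step, the Leiden resolution, and the toy random matrices are illustrative assumptions rather than the exact settings of Supplementary note 1.

```python
# Minimal sketch: compare the clustering of transcriptomic data with the clustering
# of imputed protein data via the Adjusted Rand Index (ARI). Inputs are toy matrices.
import numpy as np
import anndata
import scanpy as sc
from sklearn.metrics import adjusted_rand_score

def leiden_labels(matrix, n_pcs=30, resolution=1.0):
    """Cluster a cells-by-features matrix with a standard scanpy graph workflow."""
    adata = anndata.AnnData(np.asarray(matrix, dtype=float))
    sc.pp.scale(adata, max_value=10)          # simple z-scoring; real pipelines may differ
    sc.pp.pca(adata, n_comps=min(n_pcs, adata.n_vars - 1))
    sc.pp.neighbors(adata, n_neighbors=15)
    sc.tl.leiden(adata, resolution=resolution)
    return adata.obs["leiden"].to_numpy()

# Toy stand-ins for the real matrices: rows are cells, columns are genes / imputed proteins.
rng = np.random.default_rng(0)
rna_expr = rng.poisson(1.0, size=(300, 100)).astype(float)
imputed_protein = rng.normal(size=(300, 25))

ari = adjusted_rand_score(leiden_labels(rna_expr), leiden_labels(imputed_protein))
print(f"Clustering concordance (ARI): {ari:.3f}")  # values near 1 indicate preserved structure
```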
The findings reveal that Seurat-based methods consistently achieve high clustering concordance across the majority of datasets, while other methods exhibit greater variability in performance, indicating a lack of stability (Additional file 2: Figs. S8–S22). In the absence of surface protein ground truth, these validation results are consistent with the earlier benchmark results, further underscoring the effectiveness of Seurat-based methods. However, we also note that Seurat-based methods, particularly those relying on Seurat v4, tend to exhibit longer running times compared to some deep learning-based methods, such as moETM (Figs. b, e, left and Additional file 1: Table S3). Furthermore, their running time grows comparatively faster with training data size (Figs. b, e, left and Additional file 1: Table S3), indicating potential scalability challenges with larger datasets. As datasets continue to grow exponentially, reaching sizes of millions of cells or even larger, the feasibility of using Seurat-based methods may become limited. Therefore, there is an urgent need to enhance these methods to handle large datasets effectively. Additionally, the relatively less competitive performance of deep learning-based methods may partly result from insufficiently large training datasets. Addressing this limitation could involve developing more efficient and effective deep learning-based methods through pretraining and fine-tuning. For instance, pretraining on large-scale scRNA-seq data using self-supervised learning, followed by fine-tuning on paired data generated from CITE-seq and REAP-seq, could be a viable approach. One potential avenue is to adapt large language models such as scGPT and Geneformer, pretrained on extensive scRNA-seq data, to predict surface protein expression based on gene expression data. In this study, we comprehensively evaluate twelve state-of-the-art imputation methods for surface protein expression, emphasizing accuracy, sensitivity to training data size, robustness across experiments, and usability. Seurat-based methods, particularly Seurat v4 (PCA) and Seurat v3 (PCA), stand out as the best performers, demonstrating competitive accuracy and robustness across experiments, showing relative insensitivity to training dataset size, and offering memory-efficient and user-friendly features. However, these methods exhibit longer running times compared to certain deep learning-based approaches, highlighting scalability concerns and underscoring the necessity for future enhancements to manage larger datasets effectively.
Dataset collection and quality control
In this study, we employ eleven publicly available datasets for our benchmark analysis, each meticulously selected from reputable sources to ensure reliability and relevance. In addition, we select transcriptomic data of human peripheral blood mononuclear cells generated by seven different single-cell and single-nucleus RNA-sequencing (scRNA-seq and snRNA-seq) technologies from a systematic study to evaluate the imputation performance of methods in the absence of surface protein ground truth (see Additional file 2: Supplementary note 2 for details about the datasets).
The datasets are named following a standardized convention that includes the sequencing technology, tissue, and authors involved. These datasets encompass CITE-PBMC-Stoeckius, CITE-CBMC-Stoeckius, CITE-BMMC-Stuart, CITE-PBMC-Li, CITE-SLN111-Gayoso, CITE-SLN208-Gayoso, CITE-PBMC-Haniffa, CITE-PBMC-Sanger, CITE-PBMC10K-10X, CITE-PBMC5K-10X, REAP-PBMC-Peterson, CEL-PBMC-Ding, Drop-PBMC-Ding, inDrops-PBMC-Ding, SeqWell-PBMC-Ding, Smart-PBMC-Ding, 10xV2-PBMC-Ding, and 10xV3-PBMC-Ding. For the CITE-PBMC-Stoeckius and CITE-CBMC-Stoeckius datasets, which are generated from species-mixing experiments, we isolate human cells by filtering the datasets to include only those cells with more than 90% of UMI counts mapped to human genes. Subsequently, we remove low-quality genes (fewer than 10 counts across all cells) and low-quality cells (fewer than 200 genes detected). These criteria are adopted from the original article and cTP-net. For the CITE-SLN111-Gayoso and CITE-SLN208-Gayoso datasets, which include isotype control antibodies and hashtag antibodies in their panels, we remove these antibodies in accordance with the original article. Quality control procedures for the REAP-PBMC-Peterson dataset adhere to the criteria outlined in the original article and cTP-net. Initially, we filter out cells with high mitochondrial gene expression (more than 20% of counts from mitochondrial genes) and fewer than 250 genes detected. This is followed by the exclusion of low-quality genes (fewer than 10 counts across all cells). For scRNA-seq and snRNA-seq datasets, we filter out low-quality genes within each experimental batch (see Additional file 2: Supplementary note 2), defined as those with fewer than 5 counts across all cells in the CEL-PBMC-Ding, SeqWell-PBMC-Ding (Experiment 2), and Smart-PBMC-Ding datasets, or fewer than 10 counts in the other datasets. For the remaining datasets, we utilize preprocessed data provided directly by the authors, ensuring consistency and reliability in our analysis. Detailed summaries of the datasets after quality control are presented in Additional file 1: Table S1.
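As a concrete illustration, the following scanpy-based sketch mirrors the quality-control thresholds described above; the object names, the HUMAN_ gene-name prefix, and the MT- mitochondrial prefix are assumptions for illustration, not the exact code used in our pipeline.

```python
# Minimal sketch of the quality-control filters described above, using scanpy.
# Object names and gene-name prefixes are illustrative assumptions.
import numpy as np
import scanpy as sc

def qc_species_mixing(adata, human_prefix="HUMAN_"):
    """Keep cells with >90% of UMI counts mapped to human genes (species-mixing datasets)."""
    is_human_gene = adata.var_names.str.startswith(human_prefix)  # naming scheme is an assumption
    human_fraction = (np.asarray(adata[:, is_human_gene].X.sum(axis=1)).ravel()
                      / np.asarray(adata.X.sum(axis=1)).ravel())
    adata = adata[human_fraction > 0.9, is_human_gene].copy()
    # Remove low-quality genes (<10 counts overall) and cells (<200 genes detected).
    sc.pp.filter_genes(adata, min_counts=10)
    sc.pp.filter_cells(adata, min_genes=200)
    return adata

def qc_reap_pbmc(adata):
    """REAP-PBMC-Peterson: drop cells with >20% mitochondrial counts or <250 genes, then sparse genes."""
    adata.var["mt"] = adata.var_names.str.startswith("MT-")
    sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], percent_top=None, log1p=False, inplace=True)
    keep = (adata.obs["pct_counts_mt"] <= 20) & (adata.obs["n_genes_by_counts"] >= 250)
    adata = adata[keep].copy()
    sc.pp.filter_genes(adata, min_counts=10)
    return adata
```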
Method implementation details
Seurat. We follow the tutorial at https://satijalab.org/seurat/articles/multimodal_reference_mapping . This tutorial is based on Seurat v4, with the preprocessing of gene expression data performed using the SCTransform function. We also conduct experiments using the preprocessing steps described in the Seurat v3 paper. When performing dimensionality reduction of the gene expression data, both canonical correlation analysis (CCA) and principal component analysis (PCA) are recommended; we consider both cases in our experiments, setting the reduction parameter to cca or pcaproject in the FindTransferAnchors function. We use the TransferData function to transfer the surface protein data from the training dataset to the test dataset, and we use the default settings for all other parameters. These four variants are named Seurat v3 (CCA), Seurat v3 (PCA), Seurat v4 (CCA), and Seurat v4 (PCA).
cTP-net. cTP-net consists of two steps: it first uses SAVER-X to denoise the raw gene expression data and then predicts surface protein expression using the proposed cTP-net model. We follow the guidelines in the GitHub repository of SAVER-X ( https://github.com/jingshuw/SAVERX ) for denoising the raw gene expression data. After that, we use the code from https://github.com/zhouzilu/cTPnet/blob/master/extdata/training_05152020.py to learn the prediction model. We use the default settings for all parameters.
sciPENN. We follow the tutorial provided in the GitHub repository of sciPENN: https://github.com/jlakkis/sciPENN . For experiments containing batch information within the training and test datasets, we pass the batch key information to the parameters train_batchkeys and test_batchkey of the sciPENN_API. We use the default settings for all other parameters.
scMOG. We use the code available at https://github.com/GaoLabXDU/scMOG/blob/main/scMOG_code/bin/train_protein.py to train the model, and then utilize the code from https://github.com/GaoLabXDU/scMOG/blob/main/scMOG_code/bin/predict-protein.py to impute the test dataset. All parameters are set to their default values.
scMoGNN. We follow the tutorial available at https://github.com/openproblems-bio/neurips2021-notebooks/blob/main/notebooks/templates/NeurIPS_CITE_GEX_analysis.ipynb to preprocess the data. Subsequently, we utilize the code from https://github.com/OmicsML/dance/blob/main/examples/multi_modality/predict_modality/scmogcn.py to impute surface protein expression. When dealing with experiments containing batch information within the training and test datasets, we set the parameter no_batch_features to False; otherwise, we set it to True. All other parameters are kept at their default settings.
TotalVI. We follow the tutorial provided on the scvi-tools website: https://docs.scvi-tools.org/en/stable/tutorials/notebooks/multimodal/cite_scrna_integration_w_totalVI.html . For experiments containing batch information within the training and test datasets, we pass the batch key information to the parameter batch_key in both the sc.pp.highly_variable_genes and scvi.model.TOTALVI.setup_anndata functions. Following the solution provided at https://github.com/scverse/scvi-tools/issues/1281 , in some experiments conducted in scenario 2 we adjust the parameter lr to $4\times10^{-4}$ in the model.train function. These experiments include replicate experiments 1, 3, and 4 under the down-sampling rate of 90%, replicate experiments 3, 4, and 5 under the down-sampling rate of 80%, and replicate experiment 4 under the down-sampling rate of 50% in the CITE-BMMC-Stuart dataset, as well as all replicate experiments under the down-sampling rate of 0% in the CITE-PBMC-Stoeckius dataset. All other parameters are set to their default values.
Babel. We follow the preprocessing steps in the original paper. Subsequently, we follow the tutorial at https://github.com/OmicsML/dance-tutorials/blob/main/dance_tutorial.ipynb to learn the prediction model. When the down-sampling rate of the CITE-PBMC-Stoeckius and CITE-CBMC-Stoeckius datasets is 90% in scenario 2, or when the training data rate of these two datasets is 10% in the “ ” section, we adjust the parameter batchsize to 32. All other parameters are kept at their default settings.
moETM. We utilize the code from https://github.com/manqizhou/moETM/blob/main/dataloader.py to preprocess the data. Subsequently, we use the code from https://github.com/manqizhou/moETM/blob/main/main_cross_prediction_rna_protein.py for imputation. For experiments containing batch information within the training and test datasets, we incorporate this batch key information as an additional input. All other parameters are kept at their default settings.
scMM. We implement scMM using the code from https://github.com/OmicsML/dance/blob/main/examples/multi_modality/predict_modality/scmm.py .
Following the solution provided at https://github.com/scverse/scanpy/issues/1504 , when the down-sampling rate of the CITE-PBMC-Stoeckius dataset is 90% in scenario 2, or the training data rate of this dataset is 10% in the “ ” section, we set the parameter span to 0.5 in the sc.pp.highly_variable_genes function to select the highly variable genes. All other parameters are kept at their default settings.
Benchmark metrics
Metrics for evaluating accuracy of methods
We devise a comprehensive assessment framework to quantitatively evaluate the accuracy performance of methods, encompassing three pivotal metrics: Pearson correlation coefficient (PCC), root mean square error (RMSE), and average rank score (ARS).
PCC. PCC (Pearson correlation coefficient) gauges the degree of correlation between the predicted values and the ground truth. At the protein level, it is calculated as

$$
r_p = \frac{\sum_{i=1}^{N}\left(\hat{Y}_{ip}-\hat{\mu}_p\right)\left(Y_{ip}-\mu_p\right)}{\sqrt{\sum_{i=1}^{N}\left(\hat{Y}_{ip}-\hat{\mu}_p\right)^2}\cdot\sqrt{\sum_{i=1}^{N}\left(Y_{ip}-\mu_p\right)^2}} \tag{1}
$$

where $\hat{Y}_{ip}$ and $Y_{ip}$ represent the predicted and true expressions of protein $p$ in cell $i$, respectively. Similarly, $\hat{\mu}_p$ and $\mu_p$ denote the mean predicted and true expressions across all cells for protein $p$, respectively, with $N$ denoting the total number of cells. Additionally, we evaluate the correlation at the cell level, denoted as $r_i$, which is calculated as

$$
r_i = \frac{\sum_{p=1}^{P}\left(\hat{Y}_{ip}-\hat{\mu}_i\right)\left(Y_{ip}-\mu_i\right)}{\sqrt{\sum_{p=1}^{P}\left(\hat{Y}_{ip}-\hat{\mu}_i\right)^2}\cdot\sqrt{\sum_{p=1}^{P}\left(Y_{ip}-\mu_i\right)^2}} \tag{2}
$$

where $\hat{\mu}_i$ and $\mu_i$ represent the mean predicted and true expressions across all proteins for cell $i$, respectively, and $P$ represents the total number of proteins.
RMSE. RMSE (root mean square error) quantifies the absolute difference in numerical magnitude between the predicted values and the ground truth. At the protein level, we initially standardize the predicted and true expressions using a Z-score transformation for comparability. The RMSE for protein $p$ is then defined as

$$
e_p = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\hat{Y}_{ip}^{\prime}-Y_{ip}^{\prime}\right)^2} \tag{3}
$$

where $\hat{Y}_{ip}^{\prime}$ and $Y_{ip}^{\prime}$ represent the Z-score standardized predicted and true expressions of protein $p$ in cell $i$, respectively. We also compute RMSE at the cell level after performing $\ell_2$ normalization across proteins for each cell, defined as

$$
e_i = \sqrt{\frac{1}{P}\sum_{p=1}^{P}\left(\hat{Y}_{ip}^{\prime\prime}-Y_{ip}^{\prime\prime}\right)^2} \tag{4}
$$

where $\hat{Y}_{ip}^{\prime\prime}$ and $Y_{ip}^{\prime\prime}$ represent the $\ell_2$ normalized predicted and true expressions of protein $p$ in cell $i$, respectively.
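For clarity, the per-protein and per-cell PCC and RMSE computations (Eqs. 1–4) can be sketched with NumPy as follows; the array names and the toy example data are illustrative assumptions rather than our evaluation code.

```python
# Minimal sketch of the per-protein and per-cell PCC and RMSE computations (Eqs. 1-4).
# Y_hat and Y are cells-by-proteins matrices of predicted and true expression.
import numpy as np

def pearson_per_column(y_hat, y):
    """Column-wise (per-protein) Pearson correlation, Eq. (1)."""
    yh = y_hat - y_hat.mean(axis=0)
    yt = y - y.mean(axis=0)
    return (yh * yt).sum(axis=0) / (
        np.sqrt((yh**2).sum(axis=0)) * np.sqrt((yt**2).sum(axis=0))
    )

def rmse_protein(y_hat, y):
    """Per-protein RMSE after Z-scoring each protein across cells, Eq. (3)."""
    z = lambda m: (m - m.mean(axis=0)) / m.std(axis=0)
    return np.sqrt(((z(y_hat) - z(y)) ** 2).mean(axis=0))

def rmse_cell(y_hat, y):
    """Per-cell RMSE after L2-normalizing each cell across proteins, Eq. (4)."""
    l2 = lambda m: m / np.linalg.norm(m, axis=1, keepdims=True)
    return np.sqrt(((l2(y_hat) - l2(y)) ** 2).mean(axis=1))

# Toy example: 100 cells x 20 proteins
rng = np.random.default_rng(0)
Y = rng.gamma(2.0, 1.0, size=(100, 20))
Y_hat = Y + rng.normal(scale=0.3, size=Y.shape)

pcc_protein = pearson_per_column(Y_hat, Y)    # Eq. (1), one value per protein
pcc_cell = pearson_per_column(Y_hat.T, Y.T)   # Eq. (2), one value per cell
print(np.median(pcc_protein), np.median(pcc_cell))
print(np.median(rmse_protein(Y_hat, Y)), np.median(rmse_cell(Y_hat, Y)))
```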
ARS. We introduce ARS (average rank score) to conduct a comprehensive evaluation of methods, incorporating the aforementioned metrics. In each experiment, we calculate the four metrics for methods (PCC and RMSE values calculated respectively at the protein and cell levels), and rank the methods accordingly based on the median values of these metrics, where a method with better performance is assigned a higher rank score value. Given the rank scores based on PCC (denoted as PCC_RS) and RMSE (denoted as RMSE_RS), we define the ARS as follows:

$$
\mathrm{ARS}\left(\mathrm{PCC\_RS},\ \mathrm{RMSE\_RS}\right) = \frac{1}{2}\left(\mathrm{PCC\_RS} + \mathrm{RMSE\_RS}\right) \tag{5}
$$

Specifically, based on the rank scores $\mathrm{PCC\_RS}_{\mathrm{protein}}$ and $\mathrm{RMSE\_RS}_{\mathrm{protein}}$ at the protein level, we can obtain the ARS at the protein level as follows:

$$
\mathrm{ars}_{\mathrm{protein}} = \mathrm{ARS}\left(\mathrm{PCC\_RS}_{\mathrm{protein}},\ \mathrm{RMSE\_RS}_{\mathrm{protein}}\right) \tag{6}
$$

Similarly, we can obtain the ARS at the cell level as follows:

$$
\mathrm{ars}_{\mathrm{cell}} = \mathrm{ARS}\left(\mathrm{PCC\_RS}_{\mathrm{cell}},\ \mathrm{RMSE\_RS}_{\mathrm{cell}}\right) \tag{7}
$$

where $\mathrm{PCC\_RS}_{\mathrm{cell}}$ and $\mathrm{RMSE\_RS}_{\mathrm{cell}}$ are the rank scores of methods for the PCC and RMSE metrics at the cell level, respectively. A higher ARS value indicates better accuracy performance across all metrics in the experiment.
Metrics for evaluating the influences of training data size variations
In evaluating the influences of training data size variations on methods' accuracy performance, running time, and memory usage, we introduce the mean to evaluate methods in terms of average accuracy or efficiency, and the increment to assess methods in terms of variability. Additionally, in assessing the influences on methods' accuracy performance, i.e., the sensitivity of methods to training data size, we propose the average-increment composite score (AICS) as a comprehensive measure that considers both average accuracy and variability to reflect the effectiveness of methods.
Means of accuracy performance. We introduce means of accuracy performance to assess the average accuracy of methods across all training data sizes. In dataset $d$ from scenario 2, for each down-sampling rate $\pi$ (where $\pi$ ranges from 0 to 90% in increments of 10%), $\mathrm{PCC}_{\mathrm{protein}}^{d}(\pi)$ and $\mathrm{RMSE}_{\mathrm{protein}}^{d}(\pi)$ represent the median PCC and RMSE values across five replicate experiments at the protein level, respectively. The means of accuracy performance based on PCC and RMSE are defined as:

$$
\overline{\mathrm{PCC}}_{\mathrm{protein}}^{d} = \frac{1}{10}\sum_{\pi}\mathrm{PCC}_{\mathrm{protein}}^{d}(\pi) \tag{8}
$$

$$
\overline{\mathrm{RMSE}}_{\mathrm{protein}}^{d} = \frac{1}{10}\sum_{\pi}\mathrm{RMSE}_{\mathrm{protein}}^{d}(\pi) \tag{9}
$$

Similarly, for the median PCC and RMSE values across five replicate experiments at the cell level, we can calculate the mean values in dataset $d$ from scenario 2, denoted as:

$$
\overline{\mathrm{PCC}}_{\mathrm{cell}}^{d} = \frac{1}{10}\sum_{\pi}\mathrm{PCC}_{\mathrm{cell}}^{d}(\pi) \tag{10}
$$

$$
\overline{\mathrm{RMSE}}_{\mathrm{cell}}^{d} = \frac{1}{10}\sum_{\pi}\mathrm{RMSE}_{\mathrm{cell}}^{d}(\pi) \tag{11}
$$

A higher mean value based on PCC, or a lower mean value based on RMSE, indicates better performance in terms of PCC or RMSE across all training data sizes in dataset $d$.
Means of running time and memory usage. The means of running time ($\bar{T}$) and memory usage ($\bar{M}$) evaluate efficiency across all training data rates. For each rate $\theta$ (where $\theta$ is equivalent to 1 minus the down-sampling rate $\pi$ in scenario 2, ranging from 10 to 100% in increments of 10%), $T(\theta)$ and $M(\theta)$ represent the running time and memory usage, respectively. The means of running time and memory usage are computed as:

$$
\bar{T} = \frac{1}{10}\sum_{\theta}T(\theta) \tag{12}
$$

$$
\bar{M} = \frac{1}{10}\sum_{\theta}M(\theta) \tag{13}
$$

A lower mean value indicates more efficiency in terms of time or memory.
Increments of accuracy performance. We introduce increments of accuracy performance to assess the variability of the methods' accuracy with respect to training data size. In dataset $d$ from scenario 2, $\Delta_{\mathrm{PCC}_{\mathrm{protein}}^{d}}$ and $\Delta_{\mathrm{RMSE}_{\mathrm{protein}}^{d}}$ represent the increments based on PCC and RMSE, respectively. They are defined as the sum of the absolute differences over all adjacent down-sampling rates:

$$
\Delta_{\mathrm{PCC}_{\mathrm{protein}}^{d}} = \sum_{\pi'}\left|\mathrm{PCC}_{\mathrm{protein}}^{d}(\pi'-10) - \mathrm{PCC}_{\mathrm{protein}}^{d}(\pi')\right| \tag{14}
$$

$$
\Delta_{\mathrm{RMSE}_{\mathrm{protein}}^{d}} = \sum_{\pi'}\left|\mathrm{RMSE}_{\mathrm{protein}}^{d}(\pi') - \mathrm{RMSE}_{\mathrm{protein}}^{d}(\pi'-10)\right| \tag{15}
$$

where $\pi'$ and $\pi'-10$ are down-sampling rates, and $\pi' \in \{10\%, 20\%, \ldots, 90\%\}$. Similarly, we calculate the increment values at the cell level as:

$$
\Delta_{\mathrm{PCC}_{\mathrm{cell}}^{d}} = \sum_{\pi'}\left|\mathrm{PCC}_{\mathrm{cell}}^{d}(\pi'-10) - \mathrm{PCC}_{\mathrm{cell}}^{d}(\pi')\right| \tag{16}
$$

$$
\Delta_{\mathrm{RMSE}_{\mathrm{cell}}^{d}} = \sum_{\pi'}\left|\mathrm{RMSE}_{\mathrm{cell}}^{d}(\pi') - \mathrm{RMSE}_{\mathrm{cell}}^{d}(\pi'-10)\right| \tag{17}
$$

A lower increment value indicates less variability of accuracy performance in terms of PCC or RMSE with respect to training data size in dataset $d$.
Increments of running time and memory usage. The increments of running time ($\Delta_{\mathrm{time}}$) and memory usage ($\Delta_{\mathrm{memory}}$) measure the variability of the methods with respect to training data rate in terms of time and memory. They are defined as the sum of the absolute differences over all adjacent training data rates:

$$
\Delta_{\mathrm{time}} = \sum_{\theta'}\left|T(\theta') - T(\theta'-10)\right| \tag{18}
$$

$$
\Delta_{\mathrm{memory}} = \sum_{\theta'}\left|M(\theta') - M(\theta'-10)\right| \tag{19}
$$

where $\theta'$ and $\theta'-10$ are training data rates, and $\theta' \in \{20\%, 30\%, \ldots, 100\%\}$. A lower increment value indicates less variability with respect to training data size in terms of time or memory.
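A minimal sketch of the mean (Eqs. 8–13) and increment (Eqs. 14–19) summaries is given below; the vector of per-rate median PCC values is purely illustrative.

```python
# Minimal sketch of the mean (Eqs. 8-13) and increment (Eqs. 14-19) summaries.
# `values` holds one summary value per training data rate (10%, 20%, ..., 100%),
# e.g. the median PCC per rate for one method and dataset; the numbers are illustrative.
import numpy as np

def mean_over_rates(values):
    """Average of the per-rate values, as in Eqs. (8)-(13)."""
    return float(np.mean(values))

def increment_over_rates(values):
    """Sum of absolute differences between adjacent rates, as in Eqs. (14)-(19)."""
    values = np.asarray(values, dtype=float)
    return float(np.abs(np.diff(values)).sum())

median_pcc_per_rate = [0.55, 0.67, 0.73, 0.77, 0.80, 0.82, 0.83, 0.84, 0.85, 0.85]
print(mean_over_rates(median_pcc_per_rate))       # higher is better for PCC
print(increment_over_rates(median_pcc_per_rate))  # lower means less variability
```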
Rank score of means of accuracy performance. To consolidate the means of accuracy performance based on PCC and RMSE, as well as the results for the different datasets in scenario 2, we introduce the rank score of means of accuracy performance. Firstly, at the protein level, for dataset $d$ in scenario 2, we rank the methods accordingly based on $\overline{\mathrm{PCC}}_{\mathrm{protein}}^{d}$ and $\overline{\mathrm{RMSE}}_{\mathrm{protein}}^{d}$, where a method with better performance is assigned a higher rank score value, denoted as $(\mathrm{PCC\_RS}_{\mathrm{protein}}^{\mathrm{mean}})_d$ and $(\mathrm{RMSE\_RS}_{\mathrm{protein}}^{\mathrm{mean}})_d$, respectively. Subsequently, we can obtain the ARS based on these rank scores. Next, we average the ARS values across all datasets in this scenario, denoted as $\overline{\mathrm{ars}}_{\mathrm{protein}}^{\mathrm{mean}}$, which is defined as:

$$
\overline{\mathrm{ars}}_{\mathrm{protein}}^{\mathrm{mean}} = \frac{1}{|D|}\sum_{d}\mathrm{ARS}\left((\mathrm{PCC\_RS}_{\mathrm{protein}}^{\mathrm{mean}})_d,\ (\mathrm{RMSE\_RS}_{\mathrm{protein}}^{\mathrm{mean}})_d\right) \tag{20}
$$

where $d$ indexes the datasets used in scenario 2 (CITE-PBMC-Stoeckius, CITE-CBMC-Stoeckius, and CITE-BMMC-Stuart), and $|D|$ denotes the total number of datasets, equal to 3 here. Similarly, we calculate the mean of ARS values across all datasets in this scenario at the cell level:

$$
\overline{\mathrm{ars}}_{\mathrm{cell}}^{\mathrm{mean}} = \frac{1}{|D|}\sum_{d}\mathrm{ARS}\left((\mathrm{PCC\_RS}_{\mathrm{cell}}^{\mathrm{mean}})_d,\ (\mathrm{RMSE\_RS}_{\mathrm{cell}}^{\mathrm{mean}})_d\right) \tag{21}
$$

where $(\mathrm{PCC\_RS}_{\mathrm{cell}}^{\mathrm{mean}})_d$ and $(\mathrm{RMSE\_RS}_{\mathrm{cell}}^{\mathrm{mean}})_d$ are the rank scores of $\overline{\mathrm{PCC}}_{\mathrm{cell}}^{d}$ and $\overline{\mathrm{RMSE}}_{\mathrm{cell}}^{d}$, respectively. Finally, we rank the methods accordingly based on $\overline{\mathrm{ars}}_{\mathrm{protein}}^{\mathrm{mean}}$ and $\overline{\mathrm{ars}}_{\mathrm{cell}}^{\mathrm{mean}}$, where a method with a higher ARS value is assigned a higher rank score value, to obtain the rank scores of means of accuracy performance, denoted as $\mathrm{MEAN\_RS}_{\mathrm{protein}}$ and $\mathrm{MEAN\_RS}_{\mathrm{cell}}$, respectively. A higher rank score of means indicates better average accuracy performance across all training data sizes and datasets in scenario 2.
Rank score of increments of accuracy performance. Similarly, we introduce the rank score of increments of accuracy performance to consolidate the increments based on PCC and RMSE across the different datasets in scenario 2. Firstly, at the protein level, for dataset $d$ in scenario 2, we rank the methods accordingly based on $\Delta_{\mathrm{PCC}_{\mathrm{protein}}^{d}}$ and $\Delta_{\mathrm{RMSE}_{\mathrm{protein}}^{d}}$, where a method with lower increments is assigned a higher rank score value, denoted as $(\mathrm{PCC\_RS}_{\mathrm{protein}}^{\Delta})_d$ and $(\mathrm{RMSE\_RS}_{\mathrm{protein}}^{\Delta})_d$, respectively. Subsequently, we can obtain the ARS based on these rank scores.
Next, we average the ARS values across all datasets in this scenario, denoted as $\overline{\mathrm{ars}}_{\mathrm{protein}}^{\Delta}$, which is defined as:

$$
\overline{\mathrm{ars}}_{\mathrm{protein}}^{\Delta} = \frac{1}{|D|}\sum_{d}\mathrm{ARS}\left((\mathrm{PCC\_RS}_{\mathrm{protein}}^{\Delta})_d,\ (\mathrm{RMSE\_RS}_{\mathrm{protein}}^{\Delta})_d\right) \tag{22}
$$

Similarly, we calculate the mean of ARS values across all datasets in this scenario at the cell level:

$$
\overline{\mathrm{ars}}_{\mathrm{cell}}^{\Delta} = \frac{1}{|D|}\sum_{d}\mathrm{ARS}\left((\mathrm{PCC\_RS}_{\mathrm{cell}}^{\Delta})_d,\ (\mathrm{RMSE\_RS}_{\mathrm{cell}}^{\Delta})_d\right) \tag{23}
$$

where $(\mathrm{PCC\_RS}_{\mathrm{cell}}^{\Delta})_d$ and $(\mathrm{RMSE\_RS}_{\mathrm{cell}}^{\Delta})_d$ are the rank scores of $\Delta_{\mathrm{PCC}_{\mathrm{cell}}^{d}}$ and $\Delta_{\mathrm{RMSE}_{\mathrm{cell}}^{d}}$, respectively. Finally, we rank the methods accordingly based on $\overline{\mathrm{ars}}_{\mathrm{protein}}^{\Delta}$ and $\overline{\mathrm{ars}}_{\mathrm{cell}}^{\Delta}$, where a method with a higher ARS value is assigned a higher rank score value, to obtain the rank scores of increments of accuracy performance, denoted as $\Delta\_\mathrm{RS}_{\mathrm{protein}}$ and $\Delta\_\mathrm{RS}_{\mathrm{cell}}$, respectively. A higher rank score of increments indicates less variability with respect to training data size in terms of accuracy over all datasets in scenario 2.
AICS. To comprehensively assess the sensitivity of methods to training data size, we introduce AICS (average-increment composite score). This metric evaluates sensitivity not only by focusing on the variability of accuracy performance, but also by considering the average accuracy performance, and it is defined as the weighted sum of the rank scores of means and increments of accuracy performance:

$$
\mathrm{AICS}_{\mathrm{protein}} = \omega_{\mathrm{ai}}\,\mathrm{MEAN\_RS}_{\mathrm{protein}} + (1-\omega_{\mathrm{ai}})\,\Delta\_\mathrm{RS}_{\mathrm{protein}} \tag{24}
$$

$$
\mathrm{AICS}_{\mathrm{cell}} = \omega_{\mathrm{ai}}\,\mathrm{MEAN\_RS}_{\mathrm{cell}} + (1-\omega_{\mathrm{ai}})\,\Delta\_\mathrm{RS}_{\mathrm{cell}} \tag{25}
$$

where $\mathrm{MEAN\_RS}_{\mathrm{protein}}$ and $\Delta\_\mathrm{RS}_{\mathrm{protein}}$ are the rank scores of means and increments of accuracy performance at the protein level, respectively, and $\mathrm{MEAN\_RS}_{\mathrm{cell}}$ and $\Delta\_\mathrm{RS}_{\mathrm{cell}}$ are the corresponding rank scores at the cell level. $\omega_{\mathrm{ai}}$ is a weight that balances the rank scores of means and increments; it is recommended to be greater than 0.5, with a default setting of 0.8 (see Additional file 1: Tables S5, S6 for evaluation results under different $\omega_{\mathrm{ai}}$ settings ranging from 0 to 1 in steps of 0.1). A higher AICS value indicates more effectiveness across all training data sizes and datasets in scenario 2.
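To illustrate how the rank-based pieces fit together, the following pandas sketch walks through rank scoring, averaging across datasets, and the weighted AICS combination (Eqs. 5 and 20–25); the toy per-dataset values are assumptions for demonstration, and only the PCC side is shown to keep the example short.

```python
# Minimal sketch of the rank-based aggregation behind ARS and AICS (Eqs. 5, 20-25).
# The toy per-dataset means/increments for three methods are illustrative only.
import pandas as pd

def rank_score(series, higher_is_better=True):
    """Rank methods so that better performance receives a higher rank score."""
    return series.rank(ascending=higher_is_better)

# Per-dataset mean PCC and PCC increment at the protein level (toy values).
mean_pcc = pd.DataFrame({"d1": [0.80, 0.75, 0.70], "d2": [0.78, 0.76, 0.69]},
                        index=["methodA", "methodB", "methodC"])
incr_pcc = pd.DataFrame({"d1": [0.05, 0.02, 0.10], "d2": [0.06, 0.03, 0.12]},
                        index=mean_pcc.index)

# Rank scores per dataset (means: higher is better; increments: lower is better),
# then average across datasets. In the full framework the ARS would also average
# the PCC- and RMSE-based rank scores; here we use PCC only to keep the sketch short.
mean_rs = mean_pcc.apply(rank_score, higher_is_better=True).mean(axis=1)
incr_rs = incr_pcc.apply(rank_score, higher_is_better=False).mean(axis=1)

# Re-rank the aggregated scores to obtain MEAN_RS and Delta_RS, then combine.
MEAN_RS = rank_score(mean_rs, higher_is_better=True)
DELTA_RS = rank_score(incr_rs, higher_is_better=True)
w_ai = 0.8  # default weight
AICS = w_ai * MEAN_RS + (1 - w_ai) * DELTA_RS
print(AICS.sort_values(ascending=False))  # higher AICS = less sensitive / more effective
```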
Metrics for evaluating robustness of methods
The robustness composite score (RCS) is employed to assess the robustness of the methods' accuracy across experiments with technical and biological differences. It is calculated from the ARS values of all such experiments and thereby indicates the robustness of accuracy under real-world-like conditions.
RCS. We introduce RCS (robustness composite score) to evaluate the robustness of the ARS values of methods across different experiments with technical and biological differences. We calculate the mean and standard deviation of the ARS values of each method across all these experiments and rank the methods accordingly: a method with a higher mean value or a lower standard deviation value is assigned a higher rank score value. At the protein level, RCS is defined as:

$$
\mathrm{RCS}_{\mathrm{protein}} = \omega_{\mathrm{ms}}\,\mathrm{ARS\_RS}_{\mathrm{protein}}^{\mathrm{mean}} + (1-\omega_{\mathrm{ms}})\,\mathrm{ARS\_RS}_{\mathrm{protein}}^{\mathrm{std}} \tag{26}
$$

where $\mathrm{ARS\_RS}_{\mathrm{protein}}^{\mathrm{mean}}$ and $\mathrm{ARS\_RS}_{\mathrm{protein}}^{\mathrm{std}}$ denote the rank scores for the mean and standard deviation at the protein level, respectively. Similarly, we can calculate RCS at the cell level:

$$
\mathrm{RCS}_{\mathrm{cell}} = \omega_{\mathrm{ms}}\,\mathrm{ARS\_RS}_{\mathrm{cell}}^{\mathrm{mean}} + (1-\omega_{\mathrm{ms}})\,\mathrm{ARS\_RS}_{\mathrm{cell}}^{\mathrm{std}} \tag{27}
$$

where $\mathrm{ARS\_RS}_{\mathrm{cell}}^{\mathrm{mean}}$ and $\mathrm{ARS\_RS}_{\mathrm{cell}}^{\mathrm{std}}$ denote the rank scores for the mean and standard deviation at the cell level, respectively. $\omega_{\mathrm{ms}}$ is a weight that balances the rank scores of the mean and standard deviation values; it is recommended to be greater than 0.5, with a default setting of 0.8 (see Additional file 1: Tables S7, S8 for evaluation results under different $\omega_{\mathrm{ms}}$ settings ranging from 0 to 1 in steps of 0.1). Note that, based on the definition of RCS, robustness in this study is a comprehensive concept that considers both the stability and the competitiveness of the methods. A higher RCS value indicates more robustness across different experiments with technical and biological differences.
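Analogously, the RCS aggregation (Eqs. 26 and 27) can be sketched in a few lines; the per-experiment ARS values below are illustrative placeholders, and 0.8 is the default weight mentioned above.

```python
# Minimal sketch of the RCS aggregation (Eqs. 26-27): rank methods by the mean and
# standard deviation of their per-experiment ARS values, then take a weighted sum.
# The per-experiment ARS values are illustrative placeholders.
import pandas as pd

# Rows: methods; columns: experiments with technical/biological differences.
ars = pd.DataFrame(
    {"exp1": [9.0, 7.5, 5.0], "exp2": [8.5, 8.0, 4.5], "exp3": [7.0, 8.5, 6.0]},
    index=["methodA", "methodB", "methodC"],
)

mean_rs = ars.mean(axis=1).rank(ascending=True)   # higher mean ARS -> higher rank score
std_rs = ars.std(axis=1).rank(ascending=False)    # lower std of ARS -> higher rank score

w_ms = 0.8  # default weight
rcs = w_ms * mean_rs + (1 - w_ms) * std_rs
print(rcs.sort_values(ascending=False))  # higher RCS = more robust across experiments
```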
For the CITE-PBMC-Stoeckius and CITE-CBMC-Stoeckius datasets, which are generated from species-mixing experiments, we isolate human cells by filtering the datasets to include only those with more than 90% of UMI counts mapped to human genes . Subsequently, we remove low-quality genes (fewer than 10 counts across all cells) and low-quality cells (fewer than 200 genes detected) . These criteria are adopted from the original article and cTP-net . For the CITE-SLN111-Gayoso and CITE-SLN208-Gayoso datasets, which have isotype control antibodies and hashtag antibodies in their panels, we remove these antibodies in accordance with the original article . Quality control procedures for the REAP-PBMC-Peterson dataset adhere to the criteria outlined in the original article and cTP-net . Initially, we filter out cells with high mitochondrial gene expression (more than 20% counts from mitochondrial genes) and fewer than 250 genes detected . This is followed by the exclusion of low-quality genes (fewer than 10 counts across all cells) . For scRNA-seq and snRNA-seq datasets, we filter out low-quality genes within each experimental batch (see Additional file 2: Supplementary note 2), defined as those with fewer than 5 counts across all cells in the CEL-PBMC-Ding, SeqWell-PBMC-Ding (Experiment2), and Smart-PBMC-Ding datasets, or fewer than 10 in other datasets. For the remaining datasets, we utilize preprocessed data provided directly by the authors, ensuring consistency and reliability in our analysis. Detailed summaries of the datasets after quality control are presented in Additional file 1: Table S1. Seurat . We follow the tutorial on https://satijalab.org/seurat/articles/multimodal_reference_mapping . This tutorial is based on Seurat v4, with the preprocessing part for gene expression data using the SCTransform function. We also conduct experiments using the preprocessing steps described in the Seurat v3 paper . When performing dimensionality reduction of the gene expression data, both canonical correlation analysis (CCA) and principal component analysis (PCA) are recommended . We consider these two cases when conducting our experiments. We set the reduction parameter to cca or pcaproject in the FindTransferAnchors function. We use the TransferData function to transfer the surface protein data from the training dataset to the test dataset. We use the default settings for all other parameters. These four different methods are named Seurat v3 (CCA), Seurat v3 (PCA), Seurat v4 (CCA), and Seurat v4 (PCA). cTP-net . cTP-net consists of two steps. First, it uses SAVER-X to denoise the raw gene expression data and then predicts surface protein expression using the proposed cTP-net model. We follow the guidelines on the GitHub repository of SAVER-X ( https://github.com/jingshuw/SAVERX ) for denoising the raw gene expression data . After that, we use the code from https://github.com/zhouzilu/cTPnet/blob/master/extdata/training_05152020.py to learn the prediction model. We use the default settings for all parameters. sciPENN . We follow the tutorial provided on the GitHub repository of sciPENN: https://github.com/jlakkis/sciPENN . For experiments containing batch information within the training and test datasets, we pass the batch key information to the parameters train_batchkeys and test_batchkey of the sciPENN_API . We use the default settings for all other parameters. scMOG . 
We use the code available at https://github.com/GaoLabXDU/scMOG/blob/main/scMOG_code/bin/train_protein.py to train the model, and then utilize the code from https://github.com/GaoLabXDU/scMOG/blob/main/scMOG_code/bin/predict-protein.py for imputing the test dataset. All parameters are set to their default values. scMoGNN . We follow the tutorial available at https://github.com/openproblems-bio/neurips2021-notebooks/blob/main/notebooks/templates/NeurIPS_CITE_GEX_analysis.ipynb to preprocess the data . Subsequently, we utilize the code from https://github.com/OmicsML/dance/blob/main/examples/multi_modality/predict_modality/scmogcn.py for imputing surface protein expression. When dealing with experiments containing batch information within the training and test datasets, we set the parameter no_batch_features to False . Otherwise, we set it to True . All other parameters are kept at their default settings. TotalVI . We follow the tutorial provided on the scvi-tools website: https://docs.scvi-tools.org/en/stable/tutorials/notebooks/multimodal/cite_scrna_integration_w_totalVI.html . For experiments containing batch information within the training and test datasets, we pass the batch key information to the parameter batch_key in both the sc.pp.highly_variable_genes and scvi.model.TOTALVI.setup_anndata functions. Following the solution provided on https://github.com/scverse/scvi-tools/issues/1281 , in some experiments conducted in scenario 2, we adjust the parameter lr to [12pt]{minimal} $$4 10^{-4}$$ 4 × 10 - 4 in the model.train function. These experiments include replicate experiments 1, 3, and 4 under the down-sampling rate of 90%, replicate experiments 3, 4, and 5 under the down-sampling rate of 80%, replicate experiment 4 under the down-sampling rate of 50% in the CITE-BMMC-Stuart dataset, and all replicate experiments under the down-sampling rate of 0% in the CITE-PBMC-Stoeckius dataset. All other parameters are set to their default values. Babel . We follow the preprocessing steps in the original paper . Subsequently, we follow the tutorial on https://github.com/OmicsML/dance-tutorials/blob/main/dance_tutorial.ipynb to learn the prediction model . When the down-sampling rate of the CITE-PBMC-Stoeckius and CITE-CBMC-Stoeckius datasets is 90% in scenario 2, or when the training data rate of these two datasets is 10% in the “ ” section, we adjust the parameter batchsize to 32. All other parameters are kept at their default settings. moETM . We utilize the code from https://github.com/manqizhou/moETM/blob/main/dataloader.py to preprocess the data. Subsequently, we use the code from https://github.com/manqizhou/moETM/blob/main/main_cross_prediction_rna_protein.py for imputations. For experiments containing batch information within the training and test datasets, we incorporate this batch key information as additional inputs. All other parameters are kept at their default settings. scMM . We implement scMM using the code from https://github.com/OmicsML/dance/blob/main/examples/multi_modality/predict_modality/scmm.py . Following the solution provided at https://github.com/scverse/scanpy/issues/1504 , when the down-sampling rate of the CITE-PBMC-Stoeckius datasets is 90% in scenario 2, or the training data rate of this dataset is 10% in the “ ” section, we set the parameter span to 0.5 in the sc.pp.highly_variable_genes to select the highly variable genes. All other parameters are kept at their default settings. 
Metrics for evaluating accuracy of methods We devise a comprehensive assessment framework to quantitatively evaluate the accuracy performance of methods, encompassing three pivotal metrics: Pearson correlation coefficient (PCC), root mean square error (RMSE), and average rank score (ARS). PCC . PCC (Pearson correlation coefficient) gauges the degree of correlation between the predicted values and the ground truth. At the protein level, it is calculated as: 1 [12pt]{minimal} $$ r_p = ^N ( _{ip}-_p) ( Y_{ip}- _p) }{^N ( _{ip}-_p) ^2} ^N ( Y_{ip}- _p) ^2}} $$ r p = ∑ i = 1 N Y ^ ip - μ ^ p Y ip - μ p ∑ i = 1 N Y ^ ip - μ ^ p 2 · ∑ i = 1 N Y ip - μ p 2 where [12pt]{minimal} $$_{ip}$$ Y ^ ip and [12pt]{minimal} $$Y_{ip}$$ Y ip represent the predicted and true expressions of protein p in cell i , respectively. Similarly, [12pt]{minimal} $$_p$$ μ ^ p and [12pt]{minimal} $$ _p$$ μ p denote the mean predicted and true expressions across all cells for protein p respectively, with N denoting the total number of cells. Additionally, we evaluate the correlation at the cell level, denoted as [12pt]{minimal} $$r_i$$ r i , which is calculated as: 2 [12pt]{minimal} $$ r_i = ^P ( _{ip}-_i) ( Y_{ip}- _i) }{^P ( _{ip}-_i) ^2} ^P ( Y_{ip}- _i) ^2}} $$ r i = ∑ p = 1 P Y ^ ip - μ ^ i Y ip - μ i ∑ p = 1 P Y ^ ip - μ ^ i 2 · ∑ p = 1 P Y ip - μ i 2 where [12pt]{minimal} $$_i$$ μ ^ i and [12pt]{minimal} $$ _i$$ μ i represent the mean predicted and true expressions across all proteins for cell i respectively, and P represents the total number of proteins. RMSE . RMSE (root mean square error) quantifies the absolute difference in numerical magnitude between the predicted values and the ground truth. At the protein level, we initially standardize the predicted and true expressions using Z-score transformation for comparability. RMSE for protein p is then defined as: 3 [12pt]{minimal} $$ e_p = _{i=1}^N ( _{ip}^{'}-Y_{ip}^{'}) ^2} $$ e p = 1 N ∑ i = 1 N Y ^ ip ′ - Y ip ′ 2 where [12pt]{minimal} $$_{ip}^{'}$$ Y ^ ip ′ and [12pt]{minimal} $$Y_{ip}^{'}$$ Y ip ′ represent the Z-score standardized predicted and true expressions of protein p in cell i , respectively. We also compute RMSE at the cell level after performing [12pt]{minimal} $$ _2$$ ℓ 2 normalization across proteins for each cell, which is defined as: 4 [12pt]{minimal} $$ e_i = _{p=1}^{P}( _{ip}^{ }-Y_{ip}^{ }) ^2} $$ e i = 1 P ∑ p = 1 P Y ^ ip ″ - Y ip ″ 2 where [12pt]{minimal} $$_{ip}^{ }$$ Y ^ ip ″ and [12pt]{minimal} $$Y_{ip}^{ }$$ Y ip ″ represent the [12pt]{minimal} $$ _2$$ ℓ 2 normalized predicted and true expressions of protein p in cell i , respectively. ARS . We introduce ARS (average rank score) to conduct a comprehensive evaluation of methods, incorporating the aforementioned metrics. In each experiment, we calculate the four metrics for methods (PCC and RMSE values calculated respectively at the protein and cell levels), and rank the methods accordingly based on the median values of these metrics, where a method with better performance is assigned a higher rank score value. 
Given the rank scores based on PCC (denoted as PCC_RS) and RMSE (denoted as RMSE_RS), we define the ARS as follows: 5 [12pt]{minimal} $$ {ARS} ( {PCC}\_ {RS},\ {RMSE}\_ {RS}) = ( {PCC}\_ {RS} + {RMSE}\_ {RS}) $$ ARS PCC _ RS , RMSE _ RS = 1 2 PCC _ RS + RMSE _ RS Specifically, based on the rank scores [12pt]{minimal} $$ {PCC}\_ {RS}_{ {protein}}$$ PCC _ RS protein and [12pt]{minimal} $$ {RMSE}\_ {RS}_{ {protein}}$$ RMSE _ RS protein at the protein level, we can obtain the ARS at the protein level as follows: 6 [12pt]{minimal} $$ {ars}_{ {protein}} = {ARS} ( {PCC}\_ {RS}_{ {protein}}, {RMSE}\_ {RS}_{ {protein}}) $$ ars protein = ARS PCC _ RS protein , RMSE _ RS protein Similarly, we can obtain the ARS at the cell level as follows: 7 [12pt]{minimal} $$ {ars}_{ {cell}} = {ARS} ( {PCC}\_ {RS}_{ {cell}}, {RMSE}\_ {RS}_{ {cell}}) $$ ars cell = ARS PCC _ RS cell , RMSE _ RS cell where [12pt]{minimal} $$ {PCC}\_ {RS}_{ {cell}}$$ PCC _ RS cell and [12pt]{minimal} $$ {RMSE}\_ {RS}_{ {cell}}$$ RMSE _ RS cell are the rank scores of methods for PCC and RMSE metrics at the cell level, respectively. A higher ARS value indicates better accuracy performance across all metrics in the experiment. Metrics for evaluating the influences of training data size variations In evaluating the influences of training data size variations on methods’ accuracy performance, running time, and memory usage, we introduce the mean to evaluate methods in terms of average accuracy or efficiency, and the increment to assess methods in terms of variability. Additionally, in assessing the influences on methods’ accuracy performance, i.e., the sensitivity of methods to training data size, we propose the average-increment composite score (AICS) as a comprehensive measure that considers both average accuracy and variability to reflect the effectiveness of methods. Means of accuracy performance . We introduce means of accuracy performance to assess the average accuracy of methods across all training data sizes. In dataset d from scenario 2, for each down-sampling rate [12pt]{minimal} $$$$ π (where [12pt]{minimal} $$$$ π ranges from 0 to 90% in increments of 10%), [12pt]{minimal} $$ {PCC}_{ {protein}}^d( )$$ PCC protein d π and [12pt]{minimal} $$ {RMSE}_{ {protein}}^d( )$$ RMSE protein d π represent the median PCC and RMSE values across five replicate experiments at the protein level, respectively. The means of accuracy performance based on PCC and RMSE are defined as: 8 [12pt]{minimal} $$ }_{ {protein}}^{d} = _{ } {PCC}_{ {protein}}^d( ) $$ PCC ¯ protein d = 1 10 ∑ π PCC protein d π 9 [12pt]{minimal} $$ }_{ {protein}}^{d} = _{ } {RMSE}_{ {protein}}^d( ) $$ RMSE ¯ protein d = 1 10 ∑ π RMSE protein d π Similarly, for the median PCC and RMSE values across five replicate experiments at the cell level, we can calculate the mean values in dataset d from scenario 2, denoted as: 10 [12pt]{minimal} $$ }_{ {cell}}^{d} = _{ } {PCC}_{ {cell}}^d( ) $$ PCC ¯ cell d = 1 10 ∑ π PCC cell d π 11 [12pt]{minimal} $$ }_{ {cell}}^{d} = _{ } {RMSE}_{ {cell}}^d( ) $$ RMSE ¯ cell d = 1 10 ∑ π RMSE cell d π A higher mean value based on PCC or a lower mean value based on RMSE indicates better performance in terms of PCC or RMSE across all training data sizes in dataset d . Means of running time and memory usage . The means of running time ( [12pt]{minimal} $$$$ T ¯ ) and memory usage ( [12pt]{minimal} $$$$ M ¯ ) evaluate efficiency across all training data rates. 
For each rate [12pt]{minimal} $$$$ θ (where [12pt]{minimal} $$$$ θ is equivalent to 1 minus the down-sampling rate [12pt]{minimal} $$$$ π in scenario 2, ranging from 10 to 100% in increments of 10%), [12pt]{minimal} $$T( )$$ T ( θ ) and [12pt]{minimal} $$M( )$$ M ( θ ) represent the running time and memory usage, respectively. The means of running time and memory usage are computed as: 12 [12pt]{minimal} $$ = _{ }T( ) $$ T ¯ = 1 10 ∑ θ T θ 13 [12pt]{minimal} $$ = _{ }M( ) $$ M ¯ = 1 10 ∑ θ M θ A lower mean value indicates more efficiency in terms of time or memory. Increments of accuracy performance . We introduce increments of accuracy performance to assess the variability of methods to training data size in terms of accuracy. In dataset d from scenario 2, [12pt]{minimal} $$ _{ {PCC}_{ {protein}}^d}$$ Δ PCC protein d and [12pt]{minimal} $$ _{ {RMSE}_{ {protein}}^d}$$ Δ RMSE protein d represent the increments based on PCC and RMSE, respectively. They are defined as the sum of the absolute differences over all adjacent down-sampling rates: 14 [12pt]{minimal} $$ _{ {PCC}_{ {protein}}^d} = _{ '} | {PCC}_ {protein}^d( '-10) - {PCC}_ {protein}^d( ') | $$ Δ PCC protein d = ∑ π ′ PCC protein d π ′ - 10 - PCC protein d π ′ 15 [12pt]{minimal} $$ _{ {RMSE}_{ {protein}}^d} = _{ '} | {RMSE}_ {protein}^d( ') - {RMSE}_ {protein}^d( '-10) | $$ Δ RMSE protein d = ∑ π ′ RMSE protein d π ′ - RMSE protein d π ′ - 10 where [12pt]{minimal} $$ '$$ π ′ and [12pt]{minimal} $$ '-10$$ π ′ - 10 are the down-sampling rates, and [12pt]{minimal} $$ ' \{10\%, 20\%, , 90\%\}$$ π ′ ∈ { 10 % , 20 % , … , 90 % } . Similarly, we calculate the increment values at the cell level as: 16 [12pt]{minimal} $$ _{ {PCC}_{ {cell}}^d} = _{ '} | {PCC}_ {cell}^d( '-10) - {PCC}_ {cell}^d( ') | $$ Δ PCC cell d = ∑ π ′ PCC cell d π ′ - 10 - PCC cell d π ′ 17 [12pt]{minimal} $$ _{ {RMSE}_{ {cell}}^d} = _{ '} | {RMSE}_ {cell}^d( ') - {RMSE}_ {cell}^d( '-10) | $$ Δ RMSE cell d = ∑ π ′ RMSE cell d π ′ - RMSE cell d π ′ - 10 A lower increment value indicates less variability of accuracy performance in terms of PCC or RMSE to training data size in dataset d . Increments of running time and memory usage . The increments of running time ( [12pt]{minimal} $$ _{ {time}}$$ Δ time ) and memory usage ( [12pt]{minimal} $$ _{ {memory}}$$ Δ memory ) measure the variability of methods to training data rate in terms of time and memory. They are defined as the sum of the absolute differences over all adjacent training data rates: 18 [12pt]{minimal} $$ _{ {time}} = _{ '} | T( ') - T( '-10) | $$ Δ time = ∑ θ ′ T θ ′ - T θ ′ - 10 19 [12pt]{minimal} $$ _{ {memory}} = _{ '} | M( ') - M( '-10) | $$ Δ memory = ∑ θ ′ M θ ′ - M θ ′ - 10 where [12pt]{minimal} $$ '$$ θ ′ and [12pt]{minimal} $$ '-10$$ θ ′ - 10 are the training data rates, and [12pt]{minimal} $$ ' \{20\%, 30\%, , 100\%\}$$ θ ′ ∈ { 20 % , 30 % , … , 100 % } . A lower increment value indicates less variability to training data size in terms of time or memory. Rank score of means of accuracy performance . To consolidate the means of accuracy performance based on PCC and RMSE, as well as the results for different datasets in scenario 2, we introduce the rank score of means of accuracy performance. 
Firstly, at the protein level, for the dataset d in scenario 2, we rank the methods accordingly based on the [12pt]{minimal} $$}_{ {protein}}^{d}$$ PCC ¯ protein d and [12pt]{minimal} $$}_{ {protein}}^{d}$$ RMSE ¯ protein d , where a method with better performance is assigned a higher rank score value, denoted as [12pt]{minimal} $${ {PCC}\_ {RS}_{ {protein}}^{ {mean}}}_d$$ PCC _ RS protein mean d and [12pt]{minimal} $${ {RMSE}\_ {RS}_{ {protein}}^{ {mean}}}_d$$ RMSE _ RS protein mean d , respectively. Subsequently, we can obtain the ARS based on these rank scores. Next, we average the ARS values across all datasets in this scenario, denoted as [12pt]{minimal} $$}_{ {protein}}^{ {mean}}$$ ars ¯ protein mean , which is defined as: 20 [12pt]{minimal} $$ }_{ {protein}}^{ {mean}} = _d {ARS}( { {PCC}\_ {RS}_{ {protein}}^{ {mean}}}_d, { {RMSE}\_ {RS}_{ {protein}}^{ {mean}}}_d) $$ ars ¯ protein mean = 1 D ∑ d ARS PCC _ RS protein mean d , RMSE _ RS protein mean d where d represents the datasets used in scenario 2: CITE-PBMC-Stoeckius, CITE-CBMC-Stoeckius, and CITE-BMMC-Stuart, and [12pt]{minimal} $$| D |$$ D denotes the total number of datasets, equal to 3 here. Similarly, we calculate the mean of ARS values across all datasets in this scenario at the cell level: 21 [12pt]{minimal} $$ }_{ {cell}}^{ {mean}} = _d {ARS}( { {PCC}\_ {RS}_{ {cell}}^{ {mean}}}_d, { {RMSE}\_ {RS}_{ {cell}}^{ {mean}}}_d) $$ ars ¯ cell mean = 1 D ∑ d ARS PCC _ RS cell mean d , RMSE _ RS cell mean d where [12pt]{minimal} $${ {PCC}\_ {RS}_{ {cell}}^{ {mean}}}_d$$ PCC _ RS cell mean d and [12pt]{minimal} $${ {RMSE}\_ {RS}_{ {cell}}^{ {mean}}}_d$$ RMSE _ RS cell mean d are the rank scores of [12pt]{minimal} $$}_{ {cell}}^{d}$$ PCC ¯ cell d and [12pt]{minimal} $$}_{ {cell}}^{d}$$ RMSE ¯ cell d , respectively. Finally, we rank the methods accordingly based on the [12pt]{minimal} $$}_{ {protein}}^{ {mean}}$$ ars ¯ protein mean and [12pt]{minimal} $$}_{ {cell}}^{ {mean}}$$ ars ¯ cell mean , where a method with higher ARS value is assigned a higher rank score value, to obtain the rank scores of means of accuracy performance, which are denoted as [12pt]{minimal} $$ {MEAN}\_ {RS}_{ {protein}}$$ MEAN _ RS protein and [12pt]{minimal} $$ {MEAN}\_ {RS}_{ {cell}}$$ MEAN _ RS cell , respectively. A higher rank score of means value indicates better average accuracy performance across all training data sizes and datasets in scenario 2. Rank score of increments of accuracy performance . Similarly, we introduce the rank score of increments of accuracy performance to consolidate the increments based on PCC and RMSE across different datasets in scenario 2. Firstly, at the protein level, for the dataset d in scenario 2, we rank the methods accordingly based on the [12pt]{minimal} $$ _{ {PCC}_{ {protein}}^d}$$ Δ PCC protein d and [12pt]{minimal} $$ _{ {RMSE}_{ {protein}}^d}$$ Δ RMSE protein d , where a method with lower increments is assigned a higher rank score value, denoted as [12pt]{minimal} $${ {PCC}\_ {RS}_{ {protein}}^{ }}_d$$ PCC _ RS protein Δ d and [12pt]{minimal} $${ {RMSE}\_ {RS}_{ {protein}}^{ }}_d$$ RMSE _ RS protein Δ d , respectively. Subsequently, we can obtain the ARS based on these rank scores. 
Next, we average the ARS values across all datasets in this scenario, denoted as [12pt]{minimal} $$}_{ {protein}}^{ }$$ ars ¯ protein Δ , which is defined as: 22 [12pt]{minimal} $$ }_{ {protein}}^{ } = _d {ARS}( { {PCC}\_ {RS}_{ {protein}}^{ }}_d, { {RMSE}\_ {RS}_{ {protein}}^{ }}_d) $$ ars ¯ protein Δ = 1 D ∑ d ARS PCC _ RS protein Δ d , RMSE _ RS protein Δ d Similarly, we calculate the mean of ARS values across all datasets in this scenario at the cell level: 23 [12pt]{minimal} $$ }_{ {cell}}^{ } = _d {ARS}( { {PCC}\_ {RS}_{ {cell}}^{ }}_d, { {RMSE}\_ {RS}_{ {cell}}^{ }}_d) $$ ars ¯ cell Δ = 1 D ∑ d ARS PCC _ RS cell Δ d , RMSE _ RS cell Δ d where [12pt]{minimal} $${ {PCC}\_ {RS}_{ {cell}}^{ }}_d$$ PCC _ RS cell Δ d and [12pt]{minimal} $${ {RMSE}\_ {RS}_{ {cell}}^{ }}_d$$ RMSE _ RS cell Δ d are the rank scores of [12pt]{minimal} $$ _{ {PCC}_{ {cell}}^d}$$ Δ PCC cell d and [12pt]{minimal} $$ _{ {RMSE}_{ {cell}}^d}$$ Δ RMSE cell d , respectively. Finally, we rank the methods accordingly based on the [12pt]{minimal} $$}_{ {protein}}^{ }$$ ars ¯ protein Δ and [12pt]{minimal} $$}_{ {cell}}^{ }$$ ars ¯ cell Δ , where a method with higher ARS value is assigned a higher rank score value, to obtain the rank scores of increments of accuracy performance, which are denoted as [12pt]{minimal} $${ }\_ {RS}_{ {protein}}$$ Δ _ RS protein and [12pt]{minimal} $${ \_ {RS}}_{ {cell}}$$ Δ _ RS cell , respectively. A higher rank score of increments value indicates less variability to training data size in terms of accuracy over all datasets in scenario 2. AICS . To comprehensively assess the sensitivity of methods to training data size, we introduce AICS (average-increment composite score). This metric evaluates sensitivity by not only focusing on the variability of accuracy performance, but also considering the average accuracy performance, and is defined as the weighted sum of the rank scores of means and increments of accuracy performance: 24 [12pt]{minimal} $$ {AICS}_{ {protein}} = _{ {ai}} {MEAN}\_ {RS}_{ {protein}} + ( 1- _{ {ai}}) { \_ {RS}}_{ {protein}} $$ AICS protein = ω ai MEAN _ RS protein + 1 - ω ai Δ _ RS protein 25 [12pt]{minimal} $$ {AICS}_{ {cell}} = _{ {ai}} {MEAN}\_ {RS}_{ {cell}} + ( 1- _{ {ai}}) { \_ {RS}}_{ {cell}} $$ AICS cell = ω ai MEAN _ RS cell + 1 - ω ai Δ _ RS cell where [12pt]{minimal} $$ {MEAN}\_ {RS}_{ {protein}}$$ MEAN _ RS protein and [12pt]{minimal} $${ \_ {RS}}_{ {protein}}$$ Δ _ RS protein are the rank scores of means and increments of accuracy performance at the protein level, respectively. [12pt]{minimal} $$ {MEAN}\_ {RS}_{ {cell}}$$ MEAN _ RS cell and [12pt]{minimal} $${ \_ {RS}}_{ {cell}}$$ Δ _ RS cell are the rank scores of means and increments of accuracy performance at the cell level, respectively. [12pt]{minimal} $$ _{ {ai}}$$ ω ai is a weight to balance the rank scores of means and increments values, and is recommended to be greater than 0.5, with a default setting of 0.8 (see Additional file 1: Tables S5, S6 for evaluation results under different [12pt]{minimal} $$ _{ {ai}}$$ ω ai settings ranging from 0 to 1 in steps of 0.1). A higher AICS value indicates more effectiveness across all training data sizes and datasets in scenario 2. 
Metrics for evaluating robustness of methods

The robustness composite score (RCS) is employed to assess the robustness of methods' accuracy across experiments with technical and biological differences, which is calculated based on the ARS values from all such experiments, thereby indicating the robustness of accuracy under real-world-like conditions.

RCS. We introduce RCS (robustness composite score) to evaluate the robustness of ARS values of methods across different experiments with technical and biological differences. We calculate the mean and standard deviation of ARS values of methods across all these experiments and rank them accordingly. A method with a higher mean value or lower standard deviation value is assigned a higher rank score value. At the protein level, RCS is defined as:

$$\mathrm{RCS}_{\mathrm{protein}} = \omega_{\mathrm{ms}}\,\mathrm{ARS\_RS}_{\mathrm{protein}}^{\mathrm{mean}} + (1-\omega_{\mathrm{ms}})\,\mathrm{ARS\_RS}_{\mathrm{protein}}^{\mathrm{std}} \tag{26}$$

where $\mathrm{ARS\_RS}_{\mathrm{protein}}^{\mathrm{mean}}$ and $\mathrm{ARS\_RS}_{\mathrm{protein}}^{\mathrm{std}}$ denote the rank scores for the mean and standard deviation at the protein level, respectively. Similarly, we can calculate RCS at the cell level:

$$\mathrm{RCS}_{\mathrm{cell}} = \omega_{\mathrm{ms}}\,\mathrm{ARS\_RS}_{\mathrm{cell}}^{\mathrm{mean}} + (1-\omega_{\mathrm{ms}})\,\mathrm{ARS\_RS}_{\mathrm{cell}}^{\mathrm{std}} \tag{27}$$

where $\mathrm{ARS\_RS}_{\mathrm{cell}}^{\mathrm{mean}}$ and $\mathrm{ARS\_RS}_{\mathrm{cell}}^{\mathrm{std}}$ denote the rank scores for the mean and standard deviation at the cell level, respectively. $\omega_{\mathrm{ms}}$ is a weight to balance the rank scores of mean and standard deviation values, and is recommended to be greater than 0.5, with a default setting of 0.8 (see Additional file 1: Tables S7, S8 for evaluation results under different $\omega_{\mathrm{ms}}$ settings ranging from 0 to 1 in steps of 0.1). Note that, based on the definition of RCS, the robustness in this study is a comprehensive concept that considers both the stability and competitiveness of the methods. A higher RCS value indicates more robustness across different experiments with technical and biological differences.
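Under the same illustrative assumptions, the RCS of Eqs. (26)–(27) could be sketched as follows; the ARS matrix below is invented purely for demonstration.

```python
import numpy as np
from scipy.stats import rankdata

# Hypothetical ARS values of four methods across five experiments with
# technical/biological differences (rows = methods, columns = experiments).
ars = np.array([
    [3.0, 2.5, 3.5, 3.0, 2.5],
    [1.5, 2.0, 1.0, 1.5, 2.0],
    [4.0, 3.5, 4.0, 3.5, 4.0],
    [1.5, 2.0, 1.5, 2.0, 1.5],
])

# A higher mean ARS and a lower standard deviation both earn a higher rank score.
mean_rs = rankdata(ars.mean(axis=1))
std_rs  = rankdata(-ars.std(axis=1))

w_ms = 0.8                                  # default weight, as in Eqs. (26)-(27)
rcs = w_ms * mean_rs + (1 - w_ms) * std_rs
print(np.round(rcs, 2))
```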
Additional file 1: Supplementary tables S1-S11.

Additional file 2: Supplementary notes 1-2 and Supplementary figures S1-S22.
Why
f438a6df-b289-441c-9a7b-d78181ae0dca
8233287
Pharmacology[mh]
Public health implications of multidrug-resistant and methicillin-resistant
085bb217-f878-4924-afcf-110ec97a5ca2
11802730
Microbiology[mh]
Antimicrobial resistance (AMR) is currently one of the major global issues. It results in millions of deaths, permanent disabilities, and increased medical costs. It also endangers food safety and causes deaths among humans and animals – . The misuse and overuse of antibiotics, whether in human or veterinary medicine, are directly related to the emergence of resistant bacterial strains . Staphylococcus aureus , a superbug, is responsible for a wide spectrum of hospital- and community-acquired infections . Additionally, it is a leading contributor to foodborne illnesses, particularly foodborne intoxications . Over time, the majority of S. aureus strains have developed resistance to β-lactam antibiotics, leading to the emergence of MRSA . One of the key factors contributing to β-lactam resistance in MRSA is the presence of a highly transmissible mobile genetic element, the staphylococcal cassette chromosome mec (SCCmec), which harbors the mecA gene and its analog mecC , encoding the penicillin-binding proteins PBP-2A and PBP-2C, respectively . These genes provide resistance to most β-lactams by interfering with the drug's ability to bind to cell wall proteins . Indeed, MRSA has become a focal point of public health concern and a potentially lethal pathogen . Beyond β-lactams, MRSA exhibits resistance to a range of other antibiotics, including macrolides, tetracyclines, aminoglycosides, chloramphenicol, lincosamides, and fluoroquinolones , . This multidrug resistance complicates MRSA treatment and allows the bacteria to survive in environments where antibiotic selection pressure exists , . The pathogenicity of S. aureus primarily stems from the production of potent super-antigenic toxins, such as toxic shock syndrome toxin-1 (TSST-1), which is implicated in toxic shock syndrome (TSS) in humans . Toxic shock syndrome is a rare, life-threatening, multisystemic disease characterized by rapid onset of fever, erythematous skin rash, hypotension, hemodynamic shock, multiorgan failure, and death – . Owing to the genomic plasticity of bacteria, several virulent and antimicrobial-resistant strains of S. aureus have evolved, primarily through horizontal gene transfer and the emergence of chromosomal point mutations , . Understanding the genetic characteristics and virulence factors of these strains is crucial for effective infection control and management. MRSA has the ability to colonize and infect a wide range of hosts, existing in distinct ecological niches , . This bacterium can contaminate animal-derived food products and seafood throughout the production chain, from farm to table – . Additionally, MRSA is frequently found in human sources globally, emphasizing its zoonotic potential and the role of the food production chain as a conduit for transmission of these resistant strains between humans and animals – . Bivalve mollusks, including oysters, have a remarkable filtering capacity with low selectivity, leading them to accumulate biological contaminants such as S. aureus , – . This accumulation represents a significant risk to consumers, especially when oysters are eaten raw or undercooked . Egypt has been identified as a hyperendemic Mediterranean country for MRSA, exhibiting heightened levels of MRSA among S. aureus clinical isolates , . Given the significant concern regarding zoonotic MRSA transmission, it is crucial to assess the occurrence of S. aureus , MRSA, and MDR-MRSA in oysters sold in Egypt.
Additionally, investigating the tsst-1 virulence gene profile is essential for a comprehensive understanding of the potential risks associated with this pathogen. Ethical approval The protocol was reviewed and approved by the Institutional Animal Care and Use Committee (IACUC) of the Faculty of Veterinary Medicine, Cairo University, Egypt (Vet CU18042024932). Sample collection and processing A total of 330 fresh oysters were randomly collected from different retail fish markets in Cairo and Giza governorates over one year, from December 2021 to December 2022. The samples were immediately transported to the laboratory under sterile refrigerated conditions and divided into 33 pools, each containing ten oysters. Each pool corresponded to a distinct market (one pool per market). The external valves of the oysters were thoroughly washed with sterile water and aseptically opened. The digestive tissues were dissected, cleaned, and finely chopped to a paste-like consistency to ensure the uniformity of the starting material . Isolation and identification of S. aureus Aliquots of 2 g from each pool were enriched overnight in 5 ml of brain heart infusion broth (Oxoid, Hampshire, UK) before being plated on mannitol salt agar medium (Oxoid, Hampshire, UK) and incubated aerobically at 37 °C for 24 h. Suspected S. aureus colonies were sub-cultured to obtain pure cultures and were examined by colony morphology, Gram staining, standard biochemical tests, and the coagulase test according to Quinn et al. and Mahon and Lehman . To prevent bacterial contamination and spread in the laboratory, several stringent measures were implemented, including the use of personal protective equipment (PPE), adherence to strict hand hygiene protocols, regular disinfection of work surfaces and equipment, and proper disposal of biological wastes. Molecular identification of the Staphylococcus genus and the species S. aureus Genomic DNA was extracted from the isolates using the boiling method . All S. aureus isolates were molecularly confirmed by PCR with Staphylococcus 16S rRNA primers to confirm the Staphylococcus genus according to Zhang et al. , and with the nuc gene to confirm the species S. aureus according to McClure et al. . The reaction mixtures were carried out in a total volume of 25 µl, containing 3 µl of template DNA from each isolate, 12.5 µl of Emerald Amp MAX PCR master mix (Takara, Japan), 0.5 µl of each primer (10 pmol/µl; Metabion, Germany), and PCR-grade water up to 25 µl. The PCR amplicons were electrophoresed on a 1.5% agarose gel and visualized under ultraviolet light. The specific oligonucleotide primer sets and amplification conditions are displayed in Table . Antimicrobial susceptibility testing (AST) Antimicrobial susceptibility patterns of all confirmed S. aureus isolates were determined using the Kirby-Bauer disk diffusion method on Mueller Hinton Agar (MHA) (HiMedia) following Clinical and Laboratory Standards Institute (CLSI) guidelines .
Fourteen antibiotics commonly prescribed in humans and animals , (Oxoid, Hampshire, UK) were used, representing nine different antimicrobial classes: β-lactams (penicillins: ampicillin 10 µg, methicillin 5 µg; cephalosporins: cefoxitin 30 µg), aminoglycosides (amikacin 30 µg and gentamycin 10 µg), fluoroquinolones (ciprofloxacin 5 µg and levofloxacin 5 µg), macrolides (erythromycin 30 µg and azithromycin 15 µg), tetracyclines (doxycycline 30 µg), sulfonamides (trimethoprim/sulfamethoxazole 1.25 µg/23.75 µg), phenicols (chloramphenicol 30 µg), lincosamides (clindamycin 2 µg), and ansamycins (rifampicin 5 µg). Staphylococcus aureus isolates that tested resistant to cefoxitin were reported as MRSA according to CLSI . Additionally, MDR isolates were defined as those not susceptible to at least one antimicrobial agent in three or more antimicrobial classes . The multiple antibiotic resistance index (MARI) is calculated by dividing the number of antibiotics to which the organism is resistant by the total number of antibiotics tested . A MARI value greater than 0.2 indicates that the isolate originated from a source where antibiotics were used extensively and/or in large quantities, while a MARI of 1.0 signifies that the isolate is resistant to all antibiotics tested . Molecular confirmation of MRSA Isolates exhibiting phenotypic resistance to cefoxitin underwent additional confirmation through multiplex PCR detection of the methicillin resistance-encoding genes mecA and mecC , following the protocols outlined by Doğan et al. . The reaction mixtures were carried out in a total volume of 25 µl, containing 5 µl of template DNA from each isolate, 12.5 µl of Emerald Amp MAX PCR master mix (Takara, Japan), 0.5 µl of each primer (10 pmol/µl; Metabion, Germany), and PCR-grade water up to 25 µl. The PCR products were electrophoresed on a 1.5% agarose gel and visualized under ultraviolet light. The specific oligonucleotide primer sets and amplification conditions are presented in Table . A negative control was included, containing all components of the PCR mixture but with water instead of template DNA. The positive control was the S. aureus strain ATCC 700699. Molecular detection of the S. aureus virulence gene Phenotypic MDR-MRSA isolates were subjected to uniplex PCR targeting the toxic shock syndrome toxin gene ( tsst-1 ), according to Havaei et al. . The PCR mixtures were carried out in a total volume of 25 µl, containing 4 µl of template DNA from each isolate, 12.5 µl of Emerald Amp MAX PCR master mix (Takara, Japan), 0.5 µl of each primer (10 pmol/µl; Metabion, Germany), and PCR-grade water up to 25 µl. The PCR products were electrophoresed on a 1.5% agarose gel and visualized under ultraviolet light. The specific oligonucleotide primer sets and amplification conditions are shown in Table . Statistical analysis Statistical analysis was conducted in R (version 4.2.2, R Foundation for Statistical Computing). The isolates were clustered using the pheatmap library (version 1.0.12) .
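As a worked example of the MARI and MDR definitions above, the short Python sketch below computes both from a single hypothetical antibiogram; the resistance calls and class assignments are invented for illustration and do not correspond to any isolate in this study.

```python
# Illustrative antibiogram for one isolate: antibiotic -> (class, resistant?)
profile = {
    "ampicillin": ("beta-lactam", True),        "methicillin": ("beta-lactam", True),
    "cefoxitin": ("beta-lactam", True),         "amikacin": ("aminoglycoside", False),
    "gentamycin": ("aminoglycoside", False),    "ciprofloxacin": ("fluoroquinolone", False),
    "levofloxacin": ("fluoroquinolone", False), "erythromycin": ("macrolide", True),
    "azithromycin": ("macrolide", True),        "doxycycline": ("tetracycline", False),
    "trimethoprim/sulfamethoxazole": ("sulfonamide", False),
    "chloramphenicol": ("phenicol", False),     "clindamycin": ("lincosamide", True),
    "rifampicin": ("ansamycin", False),
}

resistant = [ab for ab, (_, r) in profile.items() if r]
mari = len(resistant) / len(profile)                 # resistant antibiotics / antibiotics tested
resistant_classes = {profile[ab][0] for ab in resistant}
is_mdr = len(resistant_classes) >= 3                 # non-susceptible in >= 3 classes

print(f"MARI = {mari:.2f} ({len(resistant)}/{len(profile)}), MDR = {is_mdr}")
# A MARI above 0.2 would suggest an origin with heavy antibiotic use.
```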
Prevalence of S. aureus among the examined oyster samples Thirteen confirmed S. aureus isolates were detected in the 33 pooled fresh oyster samples collected from various retail fish markets in Egypt, resulting in a 39.4% prevalence rate. All phenotypic S. aureus isolates tested positive for the 16S rRNA and nuc genes. Antimicrobial susceptibility profile of the identified S. aureus isolates Our results showed that all the isolates exhibited resistance to more than one of the examined antibiotics, with 100% of isolates showing resistance against methicillin (MET). In contrast, ciprofloxacin (CIP) demonstrated the highest effectiveness, with a 100% susceptibility rate, followed by levofloxacin (LE) and rifampicin (RA), which showed 92.3% susceptibility; other susceptibility patterns are shown in Fig. . Furthermore, the results of the study revealed that 10 (77%) of the 13 isolates were classified as MDR with a MARI exceeding 0.2, whereas 3 isolates (23.1%) exhibited a MARI below 0.2. Notably, none of the isolates showed an index of 1.0, as detailed in Fig. and Supplementary Table 1. MRSA isolates were recovered from 6 (46.2%) of the 13 confirmed S. aureus isolates based on phenotypic resistance to cefoxitin. All six MRSA isolates exhibited MDR, displaying resistance to both methicillin and cefoxitin. However, one MRSA isolate showed unexpected susceptibility to ampicillin. Among these, four MRSA isolates were not susceptible to clindamycin, and two isolates showed non-susceptibility to trimethoprim-sulfamethoxazole, as shown in Fig. and Supplementary Table 1.
Occurrence of S. aureus resistance and virulence genes The PCR results for the methicillin resistance-encoding genes mecA and mecC , and the toxic shock syndrome toxin gene ( tsst-1 ), in phenotypic MDR-MRSA isolates revealed that 66.7% (4 out of 6) of the isolates harbored the mecA gene, while 16.7% (1 out of 6) carried the mecC gene. Two MRSA isolates lacked both the mecA and mecC genes. Additionally, the tsst-1 virulence gene was identified in one isolate (16.7%), as shown in Fig. and Supplementary Table 2. Cluster analysis of the antimicrobial sensitivity results (phenotypic and genotypic) with virulence gene carriage in MDR-MRSA isolates ( n = 6) The heatmap (Fig. ) categorizes the six MDR-MRSA isolates into two main groups (G1 and G2) based on their susceptibility to the 14 tested antibiotics, the two methicillin resistance-encoding genes, and the one virulence gene. The top of the heatmap (C1 and C2) shows the phenotypic resistance profile together with the methicillin resistance and virulence genes tested. Within the clustering, two MDR-MRSA isolates (SA.8 and SA.16) had nearly identical resistance profiles, with SA.8 carrying the mecA and mecC genes and SA.16 carrying mecA alone. Both isolates were negative for the tsst-1 virulence gene.
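The clustering itself was performed with R's pheatmap package, as described in the methods. Purely as an illustration of the same idea, the Python sketch below hierarchically clusters a hypothetical binary profile matrix (antibiotic resistance plus mecA / mecC / tsst-1 carriage) and cuts the tree into two groups; the matrix values and most isolate labels are invented, so this is a sketch of the approach rather than a reproduction of the published analysis.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

isolates = ["SA.2", "SA.5", "SA.8", "SA.13", "SA.16", "SA.20"]  # only SA.8/SA.16 appear in the text
rng = np.random.default_rng(0)
# Columns: 14 antibiotics (1 = resistant) + mecA, mecC, tsst-1 (1 = detected).
profiles = rng.integers(0, 2, size=(len(isolates), 17))

Z = linkage(profiles, method="complete", metric="euclidean")   # hierarchical clustering
groups = fcluster(Z, t=2, criterion="maxclust")                # cut into two main groups (G1, G2)
for name, g in zip(isolates, groups):
    print(f"{name}: G{g}")
```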
Globally, food safety and public health are seriously threatened by the prevalent foodborne bacterium S. aureus . In this study, S. aureus was detected in 13 out of 33 pooled oyster samples, accounting for 39.4% of the samples. This reflects post-harvest contamination, possibly originating from inadequate storage and poor sanitation in the markets , . Workers handling oysters without protective clothing may inadvertently transfer S. aureus from their throats, nasal passages, or hands . Nevertheless, this does not preclude the possibility that S. aureus can naturally occur in oysters. Given that seafood is protein-rich, it provides an ideal environment for the growth of S. aureus . Antibiotic resistance is a growing global issue, and several studies have documented drug-resistant S. aureus in seafood , , , . In the current study, 77% (10/13) of the isolates were classified as MDR, with MARI values exceeding 0.2, indicating that the isolates originated from a source where antibiotics were used to a great degree and/or in large amounts . The prevalence of MRSA infections and colonization in bivalves and other seafood has steadily increased over time , . The present study found that 46.2% (6 out of 13) of the S. aureus isolates from oysters were MRSA, a result consistent with findings in ready-to-eat shellfish in Nigeria . The widespread and uncontrolled use of beta-lactam antibiotics in aquaculture environments, including those where oysters are harvested, contributes to the emergence of MRSA , . Notably, all the MRSA isolates obtained were MDR isolates. The evolution of MRSA isolates into MDR forms reflects a complex interplay of genetic changes and selective pressures exerted by exposure to various antibiotics , . Therefore, MRSA is placed second on the list of bacteria of high priority for the research and development of new antibiotics . The emergence of MDR-MRSA in seafood poses a significant public health risk and raises concerns about potential transmission to humans, emphasizing the zoonotic potential of these strains , . The current study revealed that one MRSA isolate from oysters showed susceptibility to ampicillin. This unexpected result may be attributed to hetero-resistance, a phenomenon in which a subpopulation of MRSA bacteria remains susceptible to beta-lactams while the majority is resistant . Additionally, some MRSA isolates exhibited non-susceptibility to clindamycin and trimethoprim-sulfamethoxazole, which are typically used to treat MRSA infections , . This finding is alarming for the public health community, as MRSA is becoming more resistant to advanced antibiotics and may become untreatable in the future. In the context of MRSA genes, the current study found that 66.7% (4 out of 6) of the isolates harbored the mecA gene, which is frequently detected and serves as the gold standard for identifying MRSA , . In contrast, 1 out of 6 isolates (16.7%) carried the mecC gene, which aligns with a study conducted in Egypt by Shebl et al. , which identified the mecC gene in three MRSA isolates, representing 6% of the total isolates. However, other studies globally have reported that certain MRSA isolates exhibit phenotypic resistance despite the absence of the mecA or mecC genes – . This resistance is likely due to the presence of alternative genes that confer beta-lactam resistance in MRSA .
Additionally, genetic mutations that alter the target site of beta-lactam antibiotics and the overproduction of β-lactamase enzymes are significant contributing factors , . To ensure comprehensive detection, combining both genotypic and phenotypic methods is advisable, minimizing the risk of missing genetically divergent strains. S. aureus produces a remarkable range of virulence factors that facilitate its pathogenicity. The current study is the first of its kind to identify the toxic shock syndrome toxin gene ( tsst-1 ) in one out of six (16.7%) MDR-MRSA isolates found in Egyptian oysters. This finding highlights the heightened toxicity associated with this strain, potentially endangering both seafood handlers and consumers, since this virulent strain may be acquired through the food chain, leading to serious diseases with limited treatment options – . In the investigation of pathogen diversity and evolution, the MDR-MRSA isolates were clustered based on their antimicrobial resistance phenotypes together with methicillin resistance ( mecA and mecC ) and virulence gene ( tsst-1 ) carriage using the pheatmap library. Notably, two isolates (SA.8 and SA.16) displayed nearly identical resistance and virulence profiles. This similarity suggests a common source, possibly contaminated water, handling practices, or processing facilities. Additionally, these isolates may share common suppliers or geographic origins, contributing to similar microbial populations, aligning with findings from studies by Chen et al. and Yu et al. . The study highlights significant epidemiological concerns by identifying the prevalence of MDR-MRSA in retail oysters in Egypt, raising critical public health and food safety issues. These findings enhance our understanding of the development and dissemination of antibiotic resistance within aquatic ecosystems. Furthermore, the study emphasizes the need for enhanced sanitary education for food handlers, who may act as reservoirs and vectors for MRSA. Future investigations employing larger sample sizes and advanced genomic methodologies are essential to deepen insights into this pressing issue. Below is the link to the electronic supplementary material. Supplementary Material 1
Proteomic dementia risk assessment in hypertension
0fcbe0ad-4415-499f-86ba-71000a516f1e
11716156
Biochemistry[mh]
A Retrospective Age Analysis of the Ambulatory Oncology Patient Satisfaction Survey: Differences in Satisfaction across Dimensions of Person-Centred Care and Unmet Needs among Older Adults Receiving Cancer Treatment
564f39d1-eddc-43d1-8d7c-744b4ed3ca24
10969488
Internal Medicine[mh]
In Alberta, Canada, 57% of new cancer cases were diagnosed among people aged 65+ years in 2021 . Age-related health, functional, psychosocial, and existential changes impact cancer care experiences, preferences, and outcomes ; however, little programmatic attention has been given to the unique concerns of this population in Alberta. Documented disparities among older adults with cancer, including over- and under-treatment , slower improvements in survival , unmet needs , and a lack of research , suggest that age-related changes may not be adequately addressed in cancer care. Given that the number of older Albertans has more than doubled in the past 20 years and is expected to nearly double again in the next 20 years , Cancer Care Alberta must strategically prepare to address older adults’ particular care needs. Patient-reported experience measures (PREMs) provide insights into patients’ perceptions of their personal care experiences. They play a critical role in health system quality improvement through the production of knowledge to inform the development of health practices and policies that align with patient experiences and needs . Specifically, PREMs can be used to address inequalities in care experiences, identifying groups of patients who report poorer experiences of care to prioritize initiatives that optimize experiences and outcomes . The Ambulatory Oncology Patient Satisfaction Survey (AOPSS) is a PREM used across many Canadian provinces to assess patients’ cancer care experiences. Since 2004, the AOPSS has been distributed every two years to people receiving cancer care in Alberta. Analysis has provided important insights to inform quality improvement initiatives and cancer care innovations in Alberta and in other provinces ; however, little consideration has been given to the age-specific experiences of older adults with cancer. In AOPSS analyses, adults aged 65+ are often grouped together, creating a single group that typically includes more than half of all respondents . However, the life stage and experiences of a 65-year-old may differ greatly from those of a 75- or 85-year-old. Therefore, sociologists break down the older adult population into life-stage subgroups, often identified as the young–old (65–74 years), the middle–old (75–84 years), and the old–old (85+ years) . Although age-related changes happen at different times and in different ways for different people , resulting in vast heterogeneity in health and functional status among older adults within each of these chronological groups , the consideration of the differences among these age groups begins to recognize the heterogeneity among older adults. Studies of patient satisfaction with cancer care in other jurisdictions that break down the older adult population into subgroups have shown lower levels of satisfaction in very old adults . However, differences in healthcare context and available resources can have an important impact on satisfaction. Little is known about cancer care satisfaction among subgroups of older adults in the Canadian—and specifically the Alberta—context. Therefore, the purpose of this study was to better understand the concerns and needs of older Albertans with cancer through a retrospective age analysis of the 2021 Alberta AOPSS. Specifically, we explored age differences in satisfaction across six dimensions of person-centred care and in the proportion of unmet needs across eight types of issues among adults receiving cancer care in Alberta, with particular attention to older adults. 
The findings may be used to inform the implementation of health system innovations that improve the experiences and outcomes of older people with cancer in Alberta and their families. 2.1. Design We used a retrospective exploratory design to conduct a secondary age analysis of the 2021 AOPSS Alberta dataset. 2.2. Procedure In Alberta, the AOPSS was distributed from February to May 2021. In February, potential respondents received a package in the mail containing an information sheet; a paper copy of the survey; a self-addressed, stamped return envelope; and a link to an online version of the survey with a unique patient identifier code. In March, a reminder was sent to those who had not yet returned the survey. 2.3. Respondents A total of 4000 survey packages were mailed to patients with a cancer diagnosis who had received at least one systemic (intravenous or oral route) or radiation treatment at one of the 17 ambulatory cancer centres in Alberta in the previous 6 months. Some participants may have received both types of treatment. Eligible patients were identified from the Alberta Cancer Registry. A random sample of eligible patients was taken for the 2 metro cancer centres in Calgary and Edmonton. To ensure adequate representation of those living in smaller urban, rural, and remote areas, a census sample of patients was taken for the 4 regional and 11 community cancer centres. 2.4. Measure The AOPSS was developed and nationally validated by the National Research Corporation (NRC) in 2003 . After minor changes, the revised tool was again validated in 2012 . Administered across many jurisdictions in Canada, the NRC managed the survey and maintained a national dashboard of results until 2023. The AOPSS contains 97 questions . Of these, 82 questions address experiences across the trajectory of cancer care. From these, the NRC identified 44 core questions to construct six validated dimensions of person-centred care, including (1) respect for patient preferences; (2) physical comfort; (3) access to care; (4) coordination and continuity of care; (5) information, communication, and education; and (6) emotional support . The AOPSS also includes a question asking, ‘Did you get all of the help you wanted to cope with the following? (a) Practical issues (e.g., transportation, accommodation), (b) Financial issues (e.g., costs of treatments), (c) Social/family issues (e.g., worry about friends and family), (d) Emotional issues (e.g., fears and worries, sadness), (e) Spiritual issues (meaning/purpose of life, faith), (f) Informational issues (e.g., understanding your illness, talking with the healthcare team), (g) Physical issues (e.g., pain, fatigue), (h) Sexual health issues’ (No; Yes, somewhat; Yes, mostly; and Yes, definitely). In addition, a question about the overall rating of care asked, ‘Overall, how would you rate the quality of care at [cancer centre name] in the past 6 months?’ (Excellent, Very Good, Good, Fair, Poor) . In the 2021 AOPSS, five Alberta-specific questions addressed experiences and satisfaction with virtual care, the goal of treatment, prognosis and advance care planning, and the involvement of the family physician. The survey ends with seven sociodemographic questions and an open-ended question for any other comments the respondent would like to make about their cancer care services. Respondents chose to complete and return a paper copy of the survey or to complete the survey online. A contact phone number was provided to answer any survey-related questions. 
In addition to the AOPSS survey data, the following associated cancer registry data were used: age (at survey distribution), tumor group, type of treatment received, cancer centre, and the first three digits of each respondent’s home postal code (forward sortation area, FSA). 2.5. Analysis We selected the age groups for our analysis based on accepted definitions, ensuring that all groups included 50 or more respondents. In Alberta, young adults with cancer are defined as those aged 15–39 years and older adults are typically defined as those aged 65+ years . We used established sociological definitions to further categorize older adults as young–old (65–74 years), middle–old (75–84 years), and old–old (85+ years) . The location data available were limited to the FSA for each respondent’s home address. With the assistance of a data and geospatial resources specialist, we used Geographic Information System (GIS) software, ArcGIS Pro version 2.9, to determine rurality by associating each FSA with the largest overlapping Alberta Health Services 2018 Official Local Geographic Area (LGA) and its corresponding classification on the Alberta Health Services Rural–Urban Continuum . The following groupings were used for the rural–urban continuum classifications: ‘metro’ included metro centres (Edmonton, Calgary) and metro-influenced areas; ‘urban’ included only urban centres (Grand Prairie, Fort McMurray, Red Deer, Lethbridge, Medicine Hat); ‘rural and remote’ included moderately urban-influenced areas, large rural centres and surrounding areas, rural areas, and rural remote areas . The sociodemographic, health, and clinical characteristics of the patients were analyzed using descriptive statistics. We tested for significant differences across age groups using Pearson’s chi-square tests for nominal variables and independent-samples Kruskal–Wallis tests for ordinal variables. For these tests, we used pairwise (test-by-test) deletion of respondents with missing data. For variables tested with Pearson chi-square tests, if the cells had an expected count of 5 or less, the response levels were combined or Monte Carlo estimates of exact p -values were used. For Pearson’s chi-square tests, we performed post hoc tests using z tests for independent proportions. For independent-samples Kruskal–Wallis tests, we performed post hoc tests using pairwise comparisons. In both cases, p -values were adjusted using the Bonferroni correction for multiple tests. The primary outcome of interest was person-centred care, assessed using the six dimensions listed above. We treated the scores for each dimension as continuous variables. To investigate the impact of the patients’ age group on perceived person-centred care, we conducted a one-way multivariate analysis of variance (MANOVA). The MANOVA model allowed for a single statistical test across all six dimensions of person-centred care, reducing the risk of false positive results, which can occur when conducting separate tests for each dimension. For the MANOVA, we used listwise deletion of respondents with missing data on any dimension. The equality of covariance matrices was tested using Box’s test and the equality of error variances was tested using Levene’s test based on the median, which is more robust than the mean for skewed data . If a significant main effect of the age group was observed, further post hoc tests were performed using Fisher’s least significant difference test to determine the specific differences for each dimension between age groups. 
The first one-way MANOVA investigated the impact of using a three-level age grouping (18–39, 40–64, and 65+) as the independent variable on the six person-centred dimension scores, which served as the dependent variables. The second one-way MANOVA investigated the impact of using a five-level age grouping (18–39, 40–64, 65–74, 75–84, and 85+) as the independent variable on the same six dimension scores. We used independent-samples Kruskal–Wallis tests to compare unmet needs and the overall rating of satisfaction with cancer centre care, which were constructed as ordinal variables using all response categories, across age groups. For these tests, we used pairwise (test-by-test) deletion of respondents with missing data. Post hoc pairwise comparisons were conducted when we found a significant difference across age groups, with p-values adjusted using the Bonferroni correction for multiple tests.
Initial data cleaning, exploration, and figure creation were done using Microsoft Excel for Microsoft 365 MSO (Version 2302). For statistical tests, the data were exported to IBM SPSS Statistics (Version 25 or 27) for analysis, and a predetermined level of statistical significance was set at p < 0.05.
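The ordinal comparisons described above were run in SPSS. The sketch below shows a rough Python analogue for a single ordinal outcome: a Kruskal–Wallis test across the five age groups followed by pairwise follow-up tests with a Bonferroni correction. SPSS’s built-in post hoc procedure for the Kruskal–Wallis test uses Dunn-type pairwise comparisons, so the pairwise Mann–Whitney tests used here are an approximation, and the column names and simulated data are assumptions rather than the study dataset.

```python
from itertools import combinations

import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)

# Simulated ordinal outcome (rank codes 0-4 for Poor..Excellent) by age group;
# the column names and values are assumptions for illustration only.
df = pd.DataFrame({
    "age_group": rng.choice(["18-39", "40-64", "65-74", "75-84", "85+"], size=600),
    "overall_rating_rank": rng.integers(0, 5, size=600),
})

# Omnibus Kruskal-Wallis test across the five age groups.
samples = [g["overall_rating_rank"].to_numpy() for _, g in df.groupby("age_group")]
h_stat, p_omnibus = stats.kruskal(*samples)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_omnibus:.3f}")

# Pairwise follow-up tests with a Bonferroni adjustment; this approximates the
# Dunn-type pairwise comparisons SPSS reports after a significant omnibus test.
pairs, pvals = [], []
for a, b in combinations(sorted(df["age_group"].unique()), 2):
    x = df.loc[df["age_group"] == a, "overall_rating_rank"]
    y = df.loc[df["age_group"] == b, "overall_rating_rank"]
    _, p = stats.mannwhitneyu(x, y, alternative="two-sided")
    pairs.append(f"{a} vs {b}")
    pvals.append(p)

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
for pair, p, sig in zip(pairs, p_adj, reject):
    print(f"{pair}: adjusted p = {p:.3f}, significant = {sig}")
```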
Of the 4000 surveys distributed, 39 (1.0%) were undeliverable and 2204 were returned, giving a 55.6% response rate.
3.1. Sociodemographic, Health, and Clinical Characteristics
3.1.1. Age Distribution of AOPSS Respondents
The age distribution of the survey respondents ( a) reflects the age distribution of new cancer diagnoses in Alberta ( b) and of people with cancer who attended a Cancer Care Alberta facility ( c) in 2021, with a notable over-representation of those aged 65–74 years and under-representation of the youngest and oldest groups.
3.1.2. Sociodemographic and Health Characteristics
There were significant differences across age groups for sex, education, the person who completed the survey, and self-rated health . Post hoc pairwise comparisons between age groups are presented in , . The oldest age group (85+ years) had the highest proportion of male respondents ( n = 51, 58.0%) and of respondents with less than high school education ( n = 35, 39.8%).
Although not significantly different across age groups, the highest proportion of those living in rural and remote areas was among those aged 65–74 years ( n = 404, 50.2%), closely followed by those aged 85+ years ( n = 44, 50.0%) and 75–84 years ( n = 227, 48.9%). The proportion of surveys completed with help or by someone other than the patient in the 85+ age group ( n = 36, 36.4%) was significantly higher than for all other age groups and that in the 75–84 age group ( n = 82, 17.7%) was significantly higher than in the 40–64 and 65–74 age groups ( and ). Generally, the proportion of those reporting excellent or very good health decreased with age, while the proportion of those reporting fair or poor health increased with age . Specifically, those aged 85+ reported significantly poorer health than those aged 18–39, 40–64, and 65–74 years . Notably, the proportion of those reporting good health remained relatively constant across age groups .
3.1.3. Clinical Characteristics
There were statistically significant differences across age groups with respect to the tumor site, time since diagnosis, type of cancer centre, involvement of the family doctor, treatment intent, and type of treatment(s) received (for IV and oral chemotherapy) . Post hoc pairwise comparisons between age groups are presented in , . In the younger age groups (18–39 years and 40–64 years), the highest proportion of respondents had breast cancers ( n = 29, 46.0% and n = 248, 31.6%, respectively). In the older age groups (65–74, 75–84, and 85+ years), the highest proportion of respondents had hematological cancers, and the proportion increased with age ( n = 214, 26.6%; n = 153, 33.0%; and n = 39, 44.3%, respectively). This was also reflected in the post hoc comparisons . Among the oldest respondents (85+ years), most had been living with cancer for two or more years ( n = 58, 65.9%), whereas, among the youngest respondents (18–39 years), most had received their diagnosis within the past year ( n = 35, 55.6%). Accordingly, those aged 18–39 years had a significantly shorter time since diagnosis than all other age groups and those aged 85+ years had a significantly longer time since diagnosis than all other age groups . The young adult group (18–39 years) had the highest proportion of respondents receiving care at a metropolitan cancer centre ; however, the post hoc comparisons did not show any significant differences between age groups . The proportions of those receiving care at regional cancer centres were, however, significantly higher among those aged 65–74 years ( n = 246, 30.6%) and 75–84 years ( n = 154, 33.2%) than among those aged 40–64 years ( n = 181, 23.1%) ( and ). The highest proportion of those receiving care at community (rural) cancer centres was in the middle-aged group (40–64 years; n = 122, 15.6%); however, the post hoc comparisons again did not show any significant differences across age groups . The proportion of respondents who reported that their family doctor was very involved increased with age, as did the proportion of respondents who were unsure about their family doctor’s involvement .
However, the only significant differences were that those aged 40–64 years had less family doctor involvement than those aged 75–84 and 85+ years . The proportion of respondents reporting a treatment intent of control, rather than cure, also increased with age, with almost two thirds ( n = 54, 61.4%) of those in the oldest group (85+ years) reporting control as the treatment intent , a significantly greater proportion than all other age groups . Interestingly, the oldest age group also showed the highest proportion of respondents who were unsure about their treatment intent ( n = 6, 6.8%) or left the question unanswered ( n = 12, 13.6%).
3.2. Dimensions of Person-Centred Care
3.2.1. Three-Level Age Grouping
For the six dimensions of person-centred care, when three age groups were used, the differences across age groups primarily pointed towards lower satisfaction for young adults (18–39 years, ). For the MANOVA, we included 781 respondents who had complete data across all dimensions (18–39 years, n = 39; 40–64 years, n = 361; 65+ years, n = 381). The MANOVA results showed a statistically significant difference in perceived person-centred care based on patients’ three-level age groups, Pillai’s trace = 0.032, F(12, 1548) = 2.104, p = 0.014. The assumption of equality of covariance matrices in MANOVA was not violated (Box’s M = 51.3, p = 0.198). Although Levene’s median-based test for equal variances suggested that the assumption of homogeneity of variances was met in five dimensions, it was violated for the ‘physical comfort’ dimension, F(2, 778) = 5.50, p = 0.004. Nevertheless, we proceeded with the MANOVA, considering its robustness against slight violations of this assumption and reporting Pillai’s trace, which is the most robust test statistic when assumptions are violated . Post hoc comparisons showed significantly lower levels of satisfaction for younger adults in the ‘access to care’ and ‘coordination and continuity of care’ dimensions ( p < 0.05). Satisfaction for older adults aged 65+ was similar to or higher than satisfaction in the younger age groups for all dimensions except physical comfort, in which satisfaction for those 65+ years was significantly lower than for those aged 40–64 years ( p < 0.05). Detailed statistics are presented in .
3.2.2. Five-Level Age Grouping
When five age groups were used to explore the age differences among the dimensions of person-centred care, a different story emerged. Decreasing patterns of satisfaction for older adults aged 75–84 years were evident across most dimensions, and those aged 85+ years showed levels of satisfaction lower than those aged 65–74 years on all dimensions of person-centred care . For the MANOVA, we included 781 respondents who had complete data across all dimensions (18–39 years, n = 39; 40–64 years, n = 361; 65–74 years, n = 264; 75–84 years, n = 105; 85+ years, n = 12). The MANOVA results showed a statistically significant difference in perceived person-centred care based on a patient’s five-level age group, Pillai’s trace = 0.059, F(24, 3096) = 1.94, p = 0.004. The assumption of equality of covariance matrices in the MANOVA was met (Box’s M = 95.2, p = 0.373). Although Levene’s median-based test for equal variances suggested that the assumption of homogeneity of variances was met in four dimensions, it was violated for the ‘physical comfort’ dimension, F(4, 776) = 2.98, p = 0.019, and the ‘emotional support’ dimension, F(4, 776) = 2.78, p = 0.026.
Nevertheless, we proceeded with the MANOVA, considering its robustness against slight violations of this assumption and reporting Pillai’s trace, which is the most robust test statistic when assumptions are violated . Post hoc comparisons showed significantly lower satisfaction for those aged 75–84 and 85+ years on several dimensions. Specifically, older adults aged 75–84 years and 85+ years showed significantly lower satisfaction in the ‘coordination and continuity of care’ dimension than adults aged 40–64 years ( p < 0.05). Older adults aged 85+ years also showed significantly lower satisfaction in this dimension than adults aged 65–74 years ( p < 0.05). Moreover, adults aged 85+ years showed significantly lower satisfaction than those aged 40–64, 65–74, and 75–84 years on the ‘information, communication, and education’ dimension ( p < 0.05). Consistent with the three-level age group analysis, older adults aged 65–74 years and 75–84 years showed significantly lower levels of satisfaction on the ‘physical comfort’ dimension than adults aged 40–64 ( p < 0.05). Adults aged 85+ had the lowest level of satisfaction on the ‘physical comfort’ dimension; however, this did not show significance in the post hoc tests due to the smaller sample size. Detailed statistics are presented in .
3.3. Unmet Needs
When respondents were asked if they had received the help that they wanted related to eight types of issues, the proportion of respondents who answered ‘no’ generally increased with age across all types of issues . The highest proportion of unmet needs was found among those aged 75–84 and/or 85+ years across all types of issues . The differences in responses across age groups were significant for emotional, financial, social/family, and sexual health issues ( p < 0.05). The differences across age groups for practical and spiritual issues also approached statistical significance ( p = 0.099 and p = 0.069, respectively). Detailed statistics for the significance testing, including all response categories, and pairwise comparisons are reported in .
3.4. Overall Ratings of Care across Five Age Groups
There was a statistically significant difference across age groups when respondents were asked about their overall quality of care, H(4) = 12.84, p = 0.012. The post hoc pairwise comparisons demonstrated that those aged 85+ years reported significantly lower satisfaction with their overall quality of care than those aged 65–74 years ( p < 0.05). Detailed statistics are presented in .
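For readers who want to see the mechanics behind the descriptive comparisons reported in Section 3.1, the sketch below applies a Pearson chi-square test to a nominal characteristic across the five age groups and then runs one post hoc z-test for independent proportions with a Bonferroni adjustment. The contingency counts are invented for illustration and do not reproduce the study data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.proportion import proportions_ztest

# Invented contingency table: rows = age groups, columns = male / female counts.
age_groups = ["18-39", "40-64", "65-74", "75-84", "85+"]
counts = np.array([
    [30, 33],
    [340, 445],
    [420, 385],
    [250, 214],
    [51, 37],
])

# Omnibus Pearson chi-square test of independence between age group and sex.
chi2, p, dof, expected = stats.chi2_contingency(counts)
print(f"Chi-square: chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")

# Post hoc contrast: z-test for independent proportions (male) between two groups,
# Bonferroni-adjusted for the 10 possible pairwise contrasts among five groups.
n_pairs = len(age_groups) * (len(age_groups) - 1) // 2  # C(5, 2) = 10
male = counts[:, 0]
totals = counts.sum(axis=1)
z, p_raw = proportions_ztest(count=[male[4], male[1]], nobs=[totals[4], totals[1]])
p_adj = min(1.0, p_raw * n_pairs)
print(f"85+ vs 40-64, proportion male: z = {z:.2f}, Bonferroni-adjusted p = {p_adj:.3f}")
```

With five age groups there are C(5, 2) = 10 possible pairwise contrasts, so the Bonferroni adjustment multiplies each raw p-value by 10 (capped at 1).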
In this study, we conducted an age analysis of the 2021 Alberta AOPSS survey data.
Our primary outcome of interest was satisfaction across six dimensions of person-centred care. When we used three age groups, those aged 65+ years showed levels of satisfaction approximately equal to or greater than those aged 18–39 years and 40–64 years on all dimensions of person-centred care, except for physical comfort, for which satisfaction was significantly lower for those aged 65+ years than those aged 40–64 years. However, when we used five age groups, dividing the older adults into three groups, a decreasing pattern of satisfaction for those aged 75–84 years was evident on most dimensions, and those aged 85+ years showed levels of satisfaction lower than those aged 65–74 years on all dimensions of person-centred care. The MANOVA results showed a statistically significant difference across age groups for both analyses. However, the post hoc comparisons with three age groups pointed towards lower levels of satisfaction primarily in the 18–39 years age group, except for the ‘physical comfort’ dimension, which was significantly lower in those aged 65+ years. With the five age groups, the post hoc analysis confirmed the significantly lower levels of satisfaction for older adults on the ‘physical comfort’ dimension, specifically among those aged 65–74 and 75–84. In addition, this analysis showed significantly lower levels of satisfaction among those aged 75+ on the ‘coordination and continuity of care’ dimension and for those aged 85+ on the ‘information, communication, and education’ dimension. This analysis of a large sample of people receiving cancer care in Alberta highlights how using a single group for all older adults aged 65+ years can obscure the lower levels of satisfaction among those aged 75–84 and 85+ years, particularly on the ‘coordination and continuity of care’ and ‘information, communication, and education’ dimensions. Older adults are a vastly heterogeneous group. Although chronological age is only a rough proxy for the variation in health and functional status that occurs among older adults , dividing older adults into multiple age groups begins to acknowledge the variation in patient-reported experiences. The dimensions that showed lower satisfaction among older adults are consistent with the existing understanding of age-related concerns. Multimorbidity increases with age, affecting only 13.3% of Canadians aged 20–64 but 32.8% of Canadians aged 65–74, 42.7% of Canadians aged 75–84, and 47.7% of Canadians aged 85+ years . Multimorbidity may contribute to greater physical discomfort during cancer care, affecting the choice and completion of treatment . In addition, the management of multiple morbidities calls for additional coordination of care among multiple medical specialists, primary care providers, and allied healthcare providers. Age-related health, functional, and social changes may also interact with cancer-related changes and require active coordination among cancer care providers and community health or social care services during and after cancer treatment. Furthermore, shifting values related to quality and quantity of life and a lack of research to inform treatment decisions among older adults with cancer can contribute to greater complexity in the treatment decision-making process, calling for greater coordination, as well as intentional information sharing and communication, among healthcare providers, patients, and families/caregivers. 
Communication challenges among older adults with cancer and their care providers are not new and may be influenced by age-related sensory, cognitive, and functional changes that affect interactions; by the involvement of families/caregivers; and/or by ageist attitudes among both care providers and patients . Decreased satisfaction among older adults on these dimensions of person-centred care is critical given the potential impact on health outcomes.
In addition, the proportion of respondents who did not receive the help that they wanted generally increased with age across all types of issues. The difference across age groups was statistically significant for emotional, financial, social/family, and sexual health issues, and it neared statistical significance for practical and spiritual issues. This finding is consistent with previous studies that have identified unmet needs among older adults diagnosed with cancer , among older adults undergoing active cancer treatment , and among older cancer survivors . The types of unmet needs highlighted, however, vary widely across studies, including medical issues ; informational issues ; practical issues, such as transportation or insurance ; financial issues ; psychological issues ; physical issues ; relational issues ; communication issues ; spiritual issues ; and issues relating to coordination among care providers, including primary care providers . The differences in unmet needs across studies may reflect differences in the measures used, as well as variations in the health system context, specifically related to the available services and resources. Notably, in a previous Canadian study of cancer survivors, researchers also found a high number of older adults expressing concern about sexual issues, with a high proportion reporting that they did not receive the help that they wanted , echoing the high proportion of unmet sexual health issues among older adults in this study. Communication by healthcare providers about sexual side effects has been found to decrease as patient age increases . The use of sexual health assessment tools, and an awareness of the potential impact of cancer and cancer treatment on sexual health, may help to address the unmet needs related to sexual health among older adults with cancer .
Finally, the overall rating of the care at cancer centres also showed significant differences across age groups. The pairwise comparisons pointed towards lower levels of satisfaction with the quality of care among those aged 85+ years.
Across all these analyses, it is important to note the overall pattern of lower satisfaction and unmet needs among those aged 85+ years. A strength of this study was having a sufficient sample size to detect significant differences for this group. In Alberta, in 2021, the number of cancer diagnoses among those aged 85+ years was about 50% higher than that among those aged 18–39 years, with both groups comprising similar proportions of those attending a cancer centre, 6% for those aged 18–39 years and 5% for those aged 85+ years . Current estimates suggest that the number of Canadians aged 85+ with cancer will more than double (increase by 130%) in the next 20 years . In Alberta, previous AOPSS results have informed the development of programs and services tailored to the needs and concerns of young adults with cancer; the results of this age analysis clearly highlight the need for services and resources tailored to the needs and concerns of older adults with cancer and their families/caregivers.
4.1. Implications
Insights from this age analysis can inform the development of services and resources tailored to support older adults with cancer and their families, highlighting which groups to target with various interventions. Interventions and services addressing physical comfort should target older adults aged 65+ years; those addressing coordination and continuity of care would most benefit those aged 75+ years; and tailored information, communication, and education would most benefit those aged 85+ years. Resources to address unmet needs, particularly those related to emotional, financial, social/family, and sexual health issues, should be considered for all older adults receiving cancer care in Alberta. Geriatric assessment and management (GAM) and patient navigation are key interventions to address these areas of concern.
GAM is an effective approach to understanding variation, addressing age-related concerns, and improving outcomes in the care of older adults with cancer . Geriatric assessment is the most commonly reported supportive intervention for older people having cancer treatment . In the American Society of Clinical Oncology guidelines, experts recommend GAM for patients aged 65+ with identified vulnerabilities, to inform cancer treatment decision making and supportive interventions to optimize treatment outcomes . Randomized controlled trials have demonstrated that, among older adults receiving cancer treatment, GAM can reduce toxicity and complications; promote treatment completion; improve quality of life and physical function/mobility; increase age-related conversations among oncologists and patients; and improve communication satisfaction for patients and families/caregivers . Therefore, GAM holds evidence-based potential for positive impacts in at least two of the dimensions of person-centred care that showed lower levels of satisfaction among older adults with cancer in Alberta.
Navigation is supported by strong evidence for improvements in patient satisfaction with care and quality of life, with emerging evidence for improved communication . Specifically, in Canada, patients treated for cancer who were assigned a nurse navigator reported higher satisfaction across all dimensions of person-centred care on the AOPSS . Among older adults with cancer specifically, a systematic review of navigation also found a positive impact on satisfaction . Within Cancer Care Alberta, the cancer patient navigator role was designed to address concerns related to continuity of care, including informational, management, and relational continuity . New models of cancer care navigation, including generalist navigators in rural settings and population-specific navigators for Indigenous persons and young adults, have been successfully implemented in Alberta, decreasing emergency visits and hospital admissions and increasing positive care experiences . Therefore, to address the lower levels of satisfaction among older adults identified in this study, particularly with respect to the coordination and continuity of care, opportunities exist to educate generalist navigators about best practices in geriatric oncology and to develop a population-specific navigator for older adults with cancer. The clear involvement of families/caregivers in completing the survey among older adults with cancer highlights the need to include and address family/caregiver concerns in interventions for older adults with cancer .
GAM and navigation interventions also show promise for family/caregiver communication and support .
4.2. Future Directions
As the AOPSS is a bi-annual survey in Alberta, future analyses may consider longitudinal changes over time related to age differences in satisfaction, supporting the evaluation of interventions addressing age-related concerns. In this study, we used univariate analyses to explore the patterns and significant differences in satisfaction across dimensions of person-centred care, unmet needs, and the overall rating of cancer centre care across age groups. Future research could incorporate multivariate analyses to explore and provide a greater understanding of these relationships. However, given that health records often contain limited information concerning other sociodemographic characteristics and age is readily available, age may remain a valuable proxy to identify those requiring additional support. The respondents for this survey were sampled from people who had received systemic or radiation treatment at a cancer centre in Alberta within the 6 months prior to survey distribution. Among adults aged 75+ years, and particularly among those aged 85+ years, those receiving systemic or radiation treatment may be an increasingly select sub-group of those who have been diagnosed with cancer. To fully understand care satisfaction and unmet needs among older adults with cancer, future research may seek to also understand the experiences of those not receiving active treatment, as well as those with suspected or clinical diagnoses for whom further diagnostic investigations are not pursued, providing a more comprehensive understanding of the supportive services and resources needed.
4.3. Limitations
Significantly lower levels of satisfaction were identified among the youngest (18–39 years) and oldest (85+ years) age groups. However, due to missing data and smaller sample sizes limiting the power to detect significant differences for these groups, we chose to use a more liberal and powerful post hoc test for the MANOVA, Fisher’s least significant difference test. This test is not typically recommended because it does not adjust the significance for multiple comparisons, raising the risk of a Type I error . It does, however, decrease the risk of Type II errors, giving a sense of where there may be significant results if more conservative tests were used with larger sample sizes. In addition, although the proportion of unmet needs for several types of issues was highest for those aged 85+ years, we did not find significant differences for this group using the more conservative Bonferroni test for post hoc analysis. Therefore, approaches to increase the number of responses for the oldest and youngest groups and reduce missing data in future studies would strengthen the power and ability to use more conservative statistical analyses.
The distribution of the 2021 AOPSS survey coincided with the third wave of COVID-19 in Alberta . During this time, many cancer care visits were conducted virtually, in-person supportive care activities were limited, and the presence of families/caregivers was restricted. Lower satisfaction with cancer care was noted in Alberta during the COVID-19 pandemic ; however, susceptibility to stress for cancer patients during the COVID-19 pandemic was not associated with age . Therefore, it would not be reasonable to attribute the age differences in satisfaction found in this study to the COVID-19 pandemic alone.
Age analysis of future patient-reported experience data collected after the COVID-19 pandemic will lead to a greater understanding of the ongoing age differences in care satisfaction.
The dataset used for this retrospective analysis did not include information about the health or functional status of respondents beyond self-rated health. Given the vast variation among older adults, challenges related to multimorbidity, activities of daily living, cognitive status, or mood may impact satisfaction with care and unmet needs ; however, data related to these domains are currently limited for older adults with cancer in Alberta. The greater integration of GAM into cancer care could provide opportunities to explore the relationships between care satisfaction and domains of geriatric concern, further informing targeted interventions to strengthen care experiences and outcomes.
Patients who experience sensory or cognitive deficits, have lower levels of education, lack the active involvement of their family/caregivers, or have higher levels of physical discomfort or fatigue may also be less able or willing to complete the lengthy AOPSS. In this study, many of these characteristics increased with age. Among older adults, we saw a higher proportion of surveys completed by, or with the help of, someone else and of missing data. Therefore, the respondents who chose, and were able, to complete and return the survey may have been different from those who were unable or chose not to do so, particularly among older adults. In future studies, the greater integration of interviews and/or telephone survey completion may facilitate the involvement of those facing barriers to survey completion , strengthening the representativeness of the results and increasing the responses. As noted, among older adults, there was a higher proportion of AOPSS completed by, or with the help of, someone else. Therefore, the responses in the older age groups may reflect a greater proportion of family/caregiver perspectives, in addition to patient perspectives. Previous studies have found lower levels of satisfaction among families/caregivers as compared to patients in cancer care . These family/caregiver perspectives, however, are also critical in informing quality improvements , suggesting a need for further research that considers both patient and family/caregiver satisfaction.
In this study, many of these characteristics increased with age. Among older adults, we saw a higher proportion of surveys completed by, or with the help of, someone else and of missing data. Therefore, the respondents who chose, and were able, to complete and return the survey may have been different from those who were unable or chose not to do so, particularly among older adults. In future studies, the greater integration of interviews and/or telephone survey completion may facilitate the involvement of those facing barriers to survey completion , strengthening the representativeness of the results and increasing the responses. As noted, among older adults, there was a higher proportion of AOPSS completed by, or with the help of, someone else. Therefore, the responses in the older age groups may reflect a greater proportion of family/caregiver perspectives, in addition to patient perspectives. Previous studies have found lower levels of satisfaction among families/caregivers as compared to patients in cancer care . These family/caregiver perspectives, however, are also critical in informing quality improvements , suggesting a need for further research that considers both patient and family/caregiver satisfaction. Grouping together all older adults aged 65+ years when analyzing data from patient experience measures can obscure the lower levels of satisfaction among those aged 75+ and 85+ years, resulting in important and nuanced age-related concerns in these older age groups being overlooked. The significantly lower levels of satisfaction among older adults in the dimensions of ‘physical comfort’, ‘coordination and continuity of care’, and ‘information, communication, and education’, as well as increasing unmet needs, significant for emotional, financial, social/family, and sexual health issues, highlight the need for programmatic attention, with tailored services and resources, to address the needs and concerns of older adults with cancer and their families in Alberta.
Overweight among seafarers working on board merchant ships
f91e85ce-1f9e-480b-918c-10f05dbc710b
6327391
Preventive Medicine[mh]
Overweight and obesity are very important issues in different countries, mainly because these conditions are associated with other problems such as cerebrovascular and coronary diseases, and several other causes of death . Obesity and metabolic syndrome are considered risk factors for dementia, and are associated with lower cognitive performance in population-based investigations . Statistics published in 2014 indicated that 1.9 billion adults worldwide were overweight and that, of these, about 600 million were obese . These figures are estimated to increase in the coming years, especially in the United States and in Europe . Obesity develops when energy intake exceeds energy expenditure, although the relative contribution of these factors is still not fully understood. Many studies have revealed that both an excess of energy intake and a reduction of energy expenditure can determine the onset of obesity . The body requires energy to support physiological functions, and when caloric intake equals the amount needed by the body, weight tends to remain stable. Over time, people tend to eat and drink more calories than they burn, and the excess calories lead to overweight and, progressively, to obesity . Obesity is a known risk factor for various diseases and can be considered a multifactorial pathology, due to genetic conditions , endocrine problems , impaired thyroid function, and environmental factors . Obesity is recognized as a cause of physical unfitness among seafarers. In addition to its influence on health, being overweight may represent a safety issue on board ships. For example, emergency operations, such as using emergency exits or climbing into a rescue boat, may be difficult for overweight people. In this regard, it has been reported that fatal accidents are more common in the shipping industry than in the construction and manufacturing industries . This suggests that, given the difficult working conditions, seafarers should be fit for working on ships and able to face the most dangerous situations. Body mass index (BMI) and age are closely associated with work ability . The report "Consultation on Obesity", published in 1997 by the World Health Organization (WHO), proposed a useful system for classifying overweight and obesity. The BMI is used internationally and is calculated by dividing body weight (in kilograms) by the square of height (in meters). Based on the WHO classification, BMI values between 18.5 and 24.9 indicate normal weight, values between 25 and 29.9 indicate overweight, and values of 30 or more indicate obesity (values between 30 and 34.9 corresponding to class I obesity). Moreover, values < 18.5 indicate an underweight condition . Seafarers are a population at high risk of developing cardiovascular diseases and cancer . For these workers, the ship is not only a workplace but a real living environment for quite long periods . Many factors, such as exposure to chemical substances, smoking, alcohol consumption and obesity, increase the risk of developing tumors and cardiovascular diseases . Several studies have shown that obesity and overweight are frequent conditions in seafarers . In this study we evaluated the prevalence of overweight and obesity by calculating the BMI of seafarers working on Italian flag ships involved in long-distance international routes, navigating the seas around the world. 
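As a minimal illustration of the BMI calculation and the WHO categories described above, the following Python sketch classifies a single record; the function names and the example values are ours, while the thresholds follow the WHO cutoffs cited in the text.

def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by the square of height in meters."""
    return weight_kg / (height_m ** 2)

def who_category(value):
    """Classify a BMI value according to the WHO cutoffs cited above."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal weight"
    if value < 30:
        return "overweight"
    return "obese"

# Example: a 90 kg seafarer who is 1.75 m tall
value = bmi(90, 1.75)                         # about 29.4
print(round(value, 1), who_category(value))   # 29.4 overweight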
The purpose of this study was to investigate whether obesity can be considered a risk factor for seafarers and to identify suitable corrective strategies. The BMI values calculated for seafarers were related to blood glucose and blood pressure levels and compared with the values of the general population of the seafarers' home countries . This information could give us insight into whether life on board ships, characterized by easy access to food, determines a greater prevalence of overweight and obesity in seamen, especially in those coming from countries with more disadvantaged socio-economic conditions . This retrospective study is based on measurements made as part of the occupational medicine examinations that are compulsory twice a year on Italian flag ships. Data examined for the present study were collected by the Centro Internazionale Radio Medico (CIRM), the Italian Telemedical Maritime Assistance Service (TMAS), in the framework of health surveillance activities performed on board ships. This study analyzed 1155 medical records of examinations carried out between 2013 and 2016 on seafarers serving on board 20 Italian flag ships. All medical data (including the identity of the seafarers) are stored in the CIRM database and are not accessible to external parties. Data were extracted from the database by the authors of this study. This study was conducted within the framework of the project Health Protection and Safety on Board Ships (acronym: HEALTHY SHIP) . From the medical examination reports, data on seafarers' nationality, age, height, weight, blood glucose, blood pressure and the results of other basic medical tests were extracted. Seafarers receiving anti-hypertensive or hypoglycemic treatment were not included in the statistical analysis (Table ). BMI values were calculated for each seafarer based on the anthropometric parameters reported in the medical records and classified according to the criteria proposed by the WHO . In view of the heterogeneous nationalities of the seafarers undergoing occupational medicine examinations, the data obtained were compared with national statistics from their respective countries, to assess whether the seafarers' lifestyle could affect their weight status. The BMI distribution was compared using the χ2 test, taking data obtained from the literature as expected values. The potential correlation of BMI with blood glucose and blood pressure levels was evaluated by comparing the data for these parameters. The means of the different parameters investigated were calculated for subjects grouped by age or by rank and are expressed as means ± S.E.M. The significance of the differences between the mean values was analyzed by analysis of variance (ANOVA). The correlations between age and physiological parameters were calculated by Pearson's test. The analysis included 1155 medical records. All seafarers examined were male, aged between 21 and 66 years (mean 39.00 ± 11.38 years). Regarding nationality, 37% of them were Italian, 29% Indian, 22% Filipino, 11% Romanian and 1% of other origins. The distributions of the BMI, blood glucose and systolic blood pressure values of seafarers, divided by their rank on board (officers or non-officers, i.e., crew members), are summarized in Table . The data did not show differences between officers and crew members for any of the parameters considered. Mean BMI values showed a general tendency toward overweight, whereas mean blood glucose and systolic blood pressure values were in the normal range (Table ). 
The percentages of subjects whose parameters were beyond normal limits are summarized in Fig. . Over 40% of all subjects examined (officers or non-officers) were overweight, and over 10% (10.49% of crew members and 11.84% of officers) were obese. Only 1.22% of crew members and 0.34% of officers were underweight (Fig. ). Only 0.52% of the subjects examined were diabetic (0.52% of crew members, 0.51% of officers), and 2.68% (2.45% of crew, 2.92% of officers) were hypertensive. No significant differences were found between the two rank groups considered. The distribution of BMI and the mean values by age are summarized in Fig. a and b. A direct correlation was found between these two parameters (Table ), with an increase in body weight with age, mainly in subjects over 45 years. The same distribution was found both in crew members and in officers (data not shown). Blood glucose (Fig. c and d) and systolic blood pressure (Fig. e and f) values were independent of age, as confirmed by Pearson's correlation (Table ). The correlations between age and the physiological parameters investigated, analyzed by Pearson's test, are shown in Table , whereas the correlations of BMI with blood glucose and systolic blood pressure levels are summarized in Table . Blood glucose levels correlated slightly with BMI values, whereas systolic blood pressure values were independent of BMI (Table ). The data obtained were further analyzed according to the nationality of the seafarers. The BMI values calculated for seafarers were also compared with those reported for the general population of the same ethnicity (country) groups. The comparison of mean BMI values between seafarers and the onshore population of the same nationality (obtained from the WHO database) revealed differences for the Filipino and Indian populations. Filipino seafarers had a mean BMI of 24.7, whereas the corresponding value in the Filipino general population was 22.6. The same was true for Indian seafarers (25.7 vs. 21.5 onshore). Romanian seafarers had a mean BMI of 27.2, compared with 26.9 in the general population. For Italians, BMI values were similar to those of the general population (25.8 for seafarers vs. 26.9 onshore). The prevalence of seafarers in the different weight classes was also compared with the values of the population of the respective countries. As shown in Table , the percentage of overweight and obese subjects was higher in seafarers than in the general population of the same country (Table ). The Filipino seafarers examined in this study were more frequently overweight (30.4% versus 17.9% in the general population) or obese (7.4% versus 3.0% in the general population) than their compatriots. Among Indian seafarers, 41.2% were overweight and 11.8% were obese. These values are higher than those of the corresponding male population, in which overweight accounts for 8.4% and obesity for only 1.3%. No significant differences in overweight were noticeable between Italian seafarers and the Italian general population (Table ), although a slightly higher percentage of obesity was observed among seafarers compared with the general population (Table ). In Romanian seafarers, overweight and obesity were similar to those in the general population (Table ). The differences in the BMI distribution of seafarers compared with that of the general population were statistically significant by the χ2 test ( p < 0.05) for Filipino, Indian and Romanian seafarers. 
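As a minimal sketch of the χ2 goodness-of-fit comparison described above, the following Python code (using SciPy) applies the test to the Filipino subgroup, taking the general-population proportions reported in the text as expected values; lumping all remaining BMI classes into a single "other" category is our own simplification.

from scipy.stats import chisquare

n = 254  # Filipino seafarers examined in this study

# Observed counts among seafarers: overweight (30.4%), obese (7.4%), all other BMI classes
observed = [round(0.304 * n), round(0.074 * n), 0]
observed[2] = n - observed[0] - observed[1]

# Expected counts from the Filipino general population (17.9% overweight, 3.0% obese)
expected = [0.179 * n, 0.030 * n, (1 - 0.179 - 0.030) * n]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {stat:.1f}, p = {p_value:.2g}")  # p < 0.05 indicates a significantly different distribution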
This study has shown the occurrence of overweight and obesity among seafarers examined on board Italian flag ships. Moreover, we observed that male seafarers working on board Italian merchant ships gain excessive weight around the age of 39–45 years and reach the highest BMI in the 55–66 year age group. The present epidemiological analysis was performed on a particular category of workers, seafarers, whose lifestyles are strongly conditioned by the fact that they work on board ships for months at a time. Analysis of the physiological parameters of blood glucose and systolic blood pressure did not show a direct correlation with age. Although 0.52% of the subjects were diabetic and 2.68% were hypertensive, the lack of correlation between body weight increase, blood glucose levels and hypertension argues against a condition of metabolic syndrome in the seafarers examined. On the other hand, no significant differences in terms of overweight and obesity were found between officers and crew. In fact, 41.44% of officers and 40.21% of crew members were overweight, and 11.84% of officers and 10.49% of crew were obese. Despite the significant number of subjects studied in this work, it was not possible to correlate the results with educational level, socio-economic conditions or physical activity, as these data are not considered in occupational medicine screenings of seafarers and were therefore not available in the medical records. Comparison of our data on excessive weight among seafarers embarked on Italian ships with the results of other studies (26–29) showed an undesirable weight pattern among seafarers, with a higher tendency to overweight and obesity in this category of workers. The 254 Filipino seafarers analyzed in this study showed overweight and obesity percentages of 30.4% and 7.4%, respectively. In the Filipino adult general population over the age of 20 years, the percentages of overweight and obesity were 17.9% and 3.0%, respectively. Filipino seafarers with a BMI > 25 kg/m^2 accounted for 37.7% of the sample, compared with 20.9% of the Philippine general population . Of the 335 Indian seafarers examined in the study, 41.2% were overweight and 11.7% were obese (Table ). In the Indian adult general population (15–54 years), 8.4% overweight and 1.3% obesity have been reported. Indian seafarers with a BMI > 25 kg/m^2 accounted for 52.7%, a significantly higher proportion than the 9.7% of males in the general population estimated by a study conducted by the International Institute for Population Science . Italian seafarers with a BMI > 25 kg/m^2 were 40.6% of the total, compared with 39.8% of the Italian population. In contrast, obese Italian seafarers were more numerous than in the Italian male general population . Romanian seafarers with a BMI > 25 kg/m^2 were 58.7% of the total, compared with 53.1% of the Romanian general population. Romanian seafarers with a BMI > 30 were 18.2%, whereas obese individuals in the Romanian male general population averaged 16.9% . The excess of overweight in seafarers compared with the general population was greater for those coming from lower-income countries. A possible explanation for this observation is that the abundance of food on board stimulates excessive eating, as a sort of reward compared with the more limited availability in their home countries. This excessive eating promotes overweight. 
This relationship is less obvious for obesity, indicating that obesity has more complex causes than simple overweight, which could be promoted by excessive eating alone. This hypothesis is indirectly supported by the observation that in Italian seafarers, who come from the country with the highest per capita income among the four analyzed in this work , no relevant differences in overweight and obesity percentages compared with the general population were noticeable. Our data show an increased tendency toward overweight and obesity among seafarers, compared with the general population of the same ethnicity. This condition may be due to unhealthy lifestyle factors such as an inappropriate diet, a lack of fresh food, the consumption of large quantities of sugared tea, coffee and other beverages linked to their irregular working hours and unique lifestyle, and a lack of physical activity. Unfortunately, no data on diet and physical activity could be obtained from the electronic health records analyzed in this study. This prompted us to develop a specific lifestyle questionnaire that will be administered to seafarers to obtain useful information about their lifestyle. Other studies confirm the high prevalence of overweight and obesity in American, Croatian and Danish seafarers. In this context, an analysis carried out on a group of mariners revealed that about 80% of them were not satisfied with the quality of food available on board and would like to eat more healthily; 20% of them kept food in their cabins, and approximately 20% of these seafarers used dietary supplements to overcome dietary gaps. Fresh products are also often unavailable on board . During sea voyages, seafarers have no choice in terms of the quality of food, and meals are influenced by the presence of different ethnicities . Other studies in the literature suggest that inappropriate nutrition on board is a widespread problem . A survey of the eating habits of Chinese seafarers showed shortages of vitamin C, vitamin B2, vitamin A and calcium in the daily diet . In general, seafarers also have insufficient levels of physical activity. A study carried out on a group of Norwegian seamen showed that 70% of them exercised at home twice a week, whereas only 39% were used to training on board. Moreover, 20% never performed physical exercise on board, while only 5% of the sample did not practice sports even at home . On the other hand, with the evolution of marine technology and equipment, onboard work is largely sedentary and requires minimal physical effort . In view of the occurrence of overweight and obesity among seafarers, campaigns should be undertaken to raise awareness of the phenomenon and of the health risks of these conditions. Specific initiatives should also be undertaken to discourage the consumption of junk food, favor healthy foods, ensure fresh food supplies and an adequate intake of vitamins and minerals, and regularize meals on board. Adequate spaces, times and programs for physical activity on board ships should also be provided to keep seafarers healthier. Promotion of a correct and healthy lifestyle can reduce the incidence of overweight and of related pathologies that are likely to appear after service on board ships (e.g., cardiac and cerebrovascular diseases). 
Management of overweight-related pathologies in an environment such as a ship on the high seas may be difficult, considering that merchant ships carry no physicians or expert health professionals on board and have only limited medical facilities . In view of this, prevention, rather than treatment, of pathologies is the most reasonable strategy for protecting the health of a category of workers which, in general, receives less health protection than workers ashore.
Interplay between IL6 and CRIM1 in thiopurine intolerance due to hematological toxicity in leukemic patients with wild-type
3b593782-8891-4126-a720-8b558d4d9e44
8102572
Pharmacology[mh]
Despite improvements in combination drug therapy and risk stratification, approximately 20% of pediatric patients with acute lymphoblastic leukemia (ALL) still experience drug resistance and treatment failure due to drug toxicities. In European populations, about 50% of thiopurine-induced cytotoxic adverse reactions, such as severe neutropenia and leukopenia, are explained by NUDT15 and TPMT genetic variants – . The Clinical Pharmacogenetics Implementation Consortium (CPIC) publishes practical guidelines for the implementation of pharmacogenetic (PGx) testing of thiopurine by using traditional star allele-based molecular phenotyping for NUDT15 and TPMT , . According to the established guideline, the thiopurine dose is pharmacogenetically titrated based on the known risk variants of NUDT15 and TPMT . However, a substantial proportion of patients with leukemia presenting no genetic variation in NUDT15 or TPMT still experience life-threatening toxicities, which may result in dose reduction and/or discontinuation of thiopurine, leading to therapeutic failure and relapse of leukemia. In an attempt to close this PGx gap, the CRIM1 rs3821169 homozygote has been identified in East Asians as a novel risk variant for thiopurine-induced hematological toxicities . Heterozygotes of the variant have shown only a mild effect on thiopurine toxicity, with an unknown clinical impact. However, its high prevalence (T = 0.066, Phase 3 of the 1000 Genomes Project ) and remarkable inter-ethnic variability (Table ) might have severely confounded previous PGx studies assessing thiopurine toxicity. Therefore, investigating the PGx interactions of novel genes/variants, other than NUDT15 and TPMT variations, is urgently needed for preventing thiopurine intolerance due to hematological toxicities and improving pediatric ALL care. The categorical nature of the traditional star allele haplotype-based method can complement the quantitative nature of the gene-wise variant burden (GVB) method for evaluating the complex interplay of multiple genes/variants . For instance, designating three categories [i.e., poor (PM), intermediate (IM), and normal (NM) metabolizers] per gene creates an exponentially increasing complexity of 3^N for a drug with N-gene PGx interactions. NUDT15 and TPMT have been assigned nine PGx subgroups for thiopurine, and this number will increase exponentially following new PGx discoveries across different ethnic groups. GVB quantitates the cumulative variant burden of one or more genes into a single score with dimensionality reduction, thus providing a reliable framework for multi-gene interaction analysis – . In the present study, we aimed to identify novel PGx interactions associated with thiopurine toxicity in pediatric ALL patients carrying both wild-type (WT) NUDT15 and TPMT (and not carrying homozygous CRIM1 rs3821169) by using whole-exome sequencing (WES) technology. Our investigation of the effect of novel candidate PGx variants on the last-cycle 6-mercaptopurine (6-MP) dose intensity percentage (DIP) tolerated by pediatric patients with ALL revealed clinically significant hematological toxicities and thiopurine intolerance. Our results provide not only measures of clinical validity but also measures of population impact (or clinical utility), including relative risk (RR), population attributable fraction (PAF), number needed to treat (NNT), and number needed to genotype (NNG) , for preventing thiopurine toxicity. 
Subjects As described in our previous study, we recruited 320 Korean pediatric patients with ALL, who underwent maintenance therapy with 6-MP at three teaching hospitals, Seoul National University Hospital (SNUH), Asan Medical Center (AMC), and Samsung Seoul Medical Center (SMC), located in Seoul, South Korea. None of the subjects met the exclusion criteria (i.e., relapse of the disease, stem cell transplantation, Burkitt's lymphoma, mixed phenotype acute leukemia, infant ALL, or very high-risk ALL) . Patients were assigned to the standard-risk group if they were 1–9 years of age at the time of diagnosis with a white blood cell (WBC) count less than 50 × 10^9/L; all other patients were assigned to the high-risk group. Patients underwent hematopoietic stem cell transplantation if they met one or more of the following criteria: age younger than 1 year, hypodiploidy, the presence of t(9;22), a WBC count equal to or greater than 200 × 10^9/L, or the 11q23 rearrangement . Patients allocated to the standard-risk group were treated with Children's Cancer Group (CCG)-1891 , CCG-1952 or Children's Oncology Group (COG) AALL-0331 regimens . In the high-risk group, the CCG-1882 , 0601, or 1501 protocols for Korean multicenter studies were employed. In Korea, the planned dose of 6-MP was modified from 75 to 50 mg/m^2, as several patients who had been administered the original dose under the Western protocol exhibited moderate to severe toxicities during 6-MP administration , . The 6-MP doses during maintenance therapy were adjusted to maintain a WBC count of 2.0–3.5 × 10^9/L, with an absolute neutrophil count (ANC) of over 500/μL. Hepatotoxicity-related dose modifications were primarily based on the COG guidelines; however, they were also performed at the discretion of the treating physician, as this study was not undertaken per uniform prospective protocols. Hematological toxicity as the clinical endpoint was estimated by the tolerated last-cycle 6-MP DIP . The last-cycle 6-MP DIP was defined as the percentage of the actually prescribed dose relative to the planned dose (50 mg/m^2), using the recorded 6-MP dose per square meter of body surface area over the last (12-week) cycle of maintenance. Doses employed for the last maintenance cycle were considered, as dose modification of 6-MP was mainly adopted during the early phase of maintenance. Further detailed descriptions of the patients and measurements have been summarized in our previous study , , . The present study was approved by the SNUH, AMC, and SMC Institutional Review Boards. Written informed consent was obtained from each participant. For participants under the age of 18 years, informed consent was obtained from a parent and/or legal guardian. All experiments and methods were performed in accordance with the relevant guidelines and regulations. Whole-exome sequencing and pharmacogenomic subgrouping WES data were obtained for the pediatric patients with ALL and analyzed in a bioinformatics pipeline as previously described , , . CPIC provides major PGx genes with haplotype definitions and molecular function annotations based on star nomenclature. We classified patients with ALL into PM, IM, and NM groups for each gene, NUDT15 and TPMT , according to the CPIC classifications , . For both genes, we considered NMs as WTs. As in our previous study, data regarding DIP and the relative frequency of neutropenia (ANC < 500/μL) were available for the discovery cohort (N = 244) . 
The relative frequency of neutropenia was defined as the ratio of the number of complete blood cell counts (CBCs) with neutropenia to the total number of CBCs tested. However, data regarding the frequency of neutropenia had not been collected for the replication cohort (N = 76) at the time of this analysis. Therefore, we conducted a variant selection process using the discovery cohort. First, we performed multivariate linear regression, adjusting for age, sex, and body surface area. Of 14,931 genes with GVB scores, 10 genes were identified as statistically significant for both DIP and the relative frequency of ANC < 500/μL. Next, we identified 45 variants with SIFT (sorting intolerant from tolerant) scores from among the 156 variants of the genes that passed the multivariate linear regression with GVB score. Among the 45 variants, 3 variants passed the multivariate linear regression cutoff for SNPs . Finally, we selected IL6 rs13306435 as a novel candidate, as it was the only missense variant among the 3 variants (Fig. , Supplementary Table ). We observed that no star allele name had been designated for the novel (or candidate) PGx genes. Thus, for the purpose of the present study, we defined non-carriers of CRIM1 rs3821169 and IL6 rs13306435 as WT carriers (WTs) of CRIM1 and IL6 , respectively (Table ). Haplotypes were determined using PHASE 2.1.1 , . Gene-wise variant burden for evaluating single- and multi-gene effects GVB analysis was performed to evaluate the aggregated impact of both common and rare variants , . For each individual, the GVB of a coding gene was defined as the geometric mean of the SIFT scores of the coding variants (with SIFT score < 0.7) in that gene, where GVB G denotes the GVB score of gene G [range 0.0–1.0]. The more deleterious the variant burden, the lower the score. First, we included NUDT15 and TPMT in the GVB analysis because these genes are clinically recognized to be related to thiopurine-induced toxicity and are covered by clinical guidelines such as the CPIC guidelines. Additionally, we included CRIM1 rs3821169, which was identified in our previous study, to define the conditional GVB . Finally, we included IL6 rs13306435 following the selection process stated above. The multi-gene effect was evaluated by defining GVB A,B,C as the geometric mean of GVB A , GVB B and GVB C [range 0.0–1.0]. Gene-variant interaction was considered by defining the conditional GVB G^ ( variant ) as the GVB score of gene G , depending on the presence or absence of the specified variant . For example, GVB CRIM1^(rs13306435) equals GVB CRIM1 when rs13306435 is present, vanishing to a WT score of 1.0 when absent. Inter-ethnic variability of allele frequencies and molecular phenotypes Using the 2504 whole-genome sequences from multiple ethnicities provided by the 1000 Genomes Project phase 3 , we investigated the inter-ethnic distributions of PGx alleles and haplotypes, along with their molecular phenotypes associated with thiopurine intolerance due to hematological toxicities (Table ). Statistical analysis The last-cycle 6-MP DIPs according to the different PGx groups were assessed using Student's t test or one-way ANOVA with post hoc Tukey test. Multiple linear regression was also applied to adjust for confounding clinical variables. 
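A minimal Python sketch of how a GVB score, as defined above, could be computed from per-variant SIFT scores; the function names and example inputs are illustrative and are not taken from the study's actual pipeline.

from math import prod

def gvb(sift_scores, threshold=0.7):
    """Gene-wise variant burden: geometric mean of the SIFT scores (< threshold)
    of a gene's coding variants; 1.0 (the wild-type score) if no variant qualifies."""
    scores = [s for s in sift_scores if s < threshold]
    if not scores:
        return 1.0
    return prod(scores) ** (1.0 / len(scores))

def multi_gene_gvb(*gene_gvbs):
    """Multi-gene GVB: geometric mean of the single-gene GVB scores."""
    return prod(gene_gvbs) ** (1.0 / len(gene_gvbs))

# Illustrative example: two deleterious variants in one gene, none in another
gvb_gene_a = gvb([0.02, 0.31, 0.95])   # 0.95 is ignored (>= 0.7)
gvb_gene_b = gvb([])                   # wild type -> 1.0
print(round(multi_gene_gvb(gvb_gene_a, gvb_gene_b), 3))  # about 0.281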
The powers of GVB NUDT15 , GVB TPMT , GVB CRIM1 , and GVB IL6 , and their combinations, for predicting 6-MP DIPs were systematically evaluated by analyzing ROC (receiver operating characteristic) curves across eight different DIP cutoffs (i.e., 10%, 15%, 25%, 35%, 45%, 60%, 80%, and 100%) in terms of AUCs (areas under the ROC curves) (Figs. , ). An ROC curve is a two-dimensional depiction of classification performance integrating all sensitivity and specificity values at all cutoff levels . All statistical analyses were performed using the R statistical package (version 3.5.1). The R package 'pROC' was used for calculating AUC values . The optimal cutoff for the GVB score was determined by maximizing Youden's index . GVB CRIM1^ ( rs3821169 *) was applied to control for the potentially confounding effect of the impressively high carrier frequency in East Asians [43.7% (= 220/504)], compared with other ethnicities (0.2–9.4%), and the mild effect of heterozygous expression on thiopurine intolerance attributed to hematological toxicities. GVB CRIM1^ ( rs3821169 *) denotes a conditional GVB score of CRIM1 dependent on the presence or absence of the homozygous rs3821169 variant (denoted as rs3821169* ). It equals GVB CRIM1 when the subject carries the homozygous rs3821169 variant and otherwise vanishes to 1.0. Clinical validity parameters We calculated and assessed the clinical validity of each model using the following statistical parameters. Positive predictive value (PPV) is the probability of an event when the genetic variant is present. In contrast, negative predictive value (NPV) is the probability of no event when the genetic variant is absent. NNT is the inverse of the absolute risk difference, that is, the difference between the proportion of events in the control group and the proportion of events in the case group, and can be written as (1) $\mathrm{NNT} = \frac{1}{P_c - P_i}$. If the NNT is 20, it implies that 20 patients need to be treated to prevent one event, such as a death or an adverse effect. NNG is the number of patients who must be genotyped to prevent one patient from experiencing an adverse event, and can be calculated with the following formula: (2) $\mathrm{NNG} = \frac{\mathrm{NNT}}{P_c + P_i}$. For example, an NNG of 33 means that one adverse event is avoided for every 33 patients genotyped. RR is the ratio of the proportion of events in the case group to the proportion of events in the control group, calculated by the following formula: (3) $\mathrm{RR} = \frac{P_i}{P_c}$. The odds ratio (OR) is based on the comparison of the relative odds of an event in each group and can be determined as (4) $\mathrm{OR} = \frac{P_i/(1 - P_i)}{P_c/(1 - P_c)}$. In the above equations, $P_c$ is the proportion of events in the control group and $P_i$ is the proportion of events in the case group. PAF is the proportion of events that would be eliminated from the population if exposure to the risk factor were eliminated, and can be assessed as (5) $\mathrm{PAF} = \frac{P(Y=1) - P(Y=1 \mid X=0)}{P(Y=1)}$. In Eq. (5), Y indicates the development of an event and X is a binary risk factor . 
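A minimal Python sketch of the clinical validity and population-impact measures defined above, computed directly from the event proportions; the function is illustrative and not part of the study's analysis code.

def clinical_validity(p_control, p_case, p_event, p_event_unexposed):
    """Compute NNT, NNG, RR, OR and PAF from the proportions defined in the text.

    p_control, p_case: proportion of events in the control and case groups (P_c, P_i).
    p_event: overall proportion of events, P(Y = 1).
    p_event_unexposed: proportion of events among those without the risk factor, P(Y = 1 | X = 0).
    """
    nnt = 1.0 / abs(p_control - p_case)                                   # Eq. (1): inverse of the absolute risk difference
    nng = nnt / (p_control + p_case)                                      # Eq. (2)
    rr = p_case / p_control                                               # Eq. (3)
    odds_ratio = (p_case / (1 - p_case)) / (p_control / (1 - p_control))  # Eq. (4)
    paf = (p_event - p_event_unexposed) / p_event                         # Eq. (5)
    return {"NNT": nnt, "NNG": nng, "RR": rr, "OR": odds_ratio, "PAF": paf}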
IL6 rs13306435 as a novel pharmacogenetic variant for thiopurine intolerance due to hematological toxicities We classified patients into three groups according to the variant status of NUDT15 and TPMT to identify new variants not confounded by the two most critical PGx genes associated with thiopurine intolerance. Table describes the clinical characteristics of 320 pediatric patients with ALL according to their PGx subgroups, presenting 80 patients who were non-WTs (i.e., IMs or PMs) of NUDT15 and/or TPMT ( N = 80), 115 patients who were all WTs (WT carriers of all the four genes), and 125 who were WTs of both genes, NUDT15 and TPMT (both WTs) and carried CRIM1 rs3821169 and/or IL6 rs13306435 variants. Of the 125 patients with WT characterization for both NUDT15 and TPMT , 94, 12, 11, and 8 patients belonged to the heterozygous CRIM1 , heterozygous IL6 , homozygous CRIM1 , and IL6 and CRIM1 variant groups, respectively (Table ). We used patients with all WTs ( N = 115) as a control group for the following analysis. 
The average tolerated 6-MP DIPs of non-WTs for NUDT15 (47.1 ± 30.5%, N = 72) and/or TPMT (56.6 ± 33.6%, N = 9) were significantly lower than that of all WTs (71.3 ± 29.6%, N = 115) ( p < 0.001, Table ). The patients with homozygous CRIM1 (dark blue circle in Fig. ) tolerated significantly lower 6-MP DIPs than the patients with all WTs, both before ( N = 16, 44.6 ± 35.2%) and after ( N = 11, 42.3 ± 35.0%) excluding the five subjects with NUDT15 (59.76 ± 37.24%) or IL6 (9.77%) variants. To rule out the PGx effect of NUDT15 , TPMT , and homozygous CRIM1 on thiopurine intolerance, we extracted 228 samples from non-carriers of these variants for the further discovery of novel PGx variants. We observed that carriers of IL6 rs13306435 ( N = 19, 48.0 ± 27.3%) exhibited significantly lower 6-MP DIPs than non-carriers ( N = 209, 69.9 ± 29.0%), as evaluated by Student's t test ( p = 0.0016) and multiple covariate linear regression ( p = 0.0028). Furthermore, of the 19 carriers, the 7 patients with both IL6 rs13306435 and CRIM1 variants demonstrated significantly lower 6-MP tolerance, with a DIP of 24.7 ± 8.9%, compared with the 12 patients harboring only the IL6 rs13306435 variant (61.6 ± 25.1%; orange circle in Fig. ). A potential interplay between IL6 and CRIM1 variants was thus suggested, which was further supported by the finding that the seven patients with both IL6 and CRIM1 variants showed significantly lower 6-MP DIPs (24.7 ± 8.9%) than the 94 heterozygous CRIM1 carriers (68.1 ± 28.4%; light blue circle in Fig. ). Interplay of IL6 and CRIM1 variants in thiopurine toxicity Figure exhibits the distributions of the last-cycle 6-MP DIPs of the 115 all WTs (Fig. a), carriers of only heterozygous CRIM1 ( N = 94, Fig. b), carriers of only heterozygous IL6 ( N = 12, Fig. c), carriers of only homozygous CRIM1 ( N = 11, Fig. d), and carriers of both IL6 and CRIM1 variants ( N = 8, Fig. e). The homozygous CRIM1 and the IL6 and CRIM1 groups showed significantly lower 6-MP DIPs (44.6 ± 35.2% and 24.7 ± 8.9%, respectively, Fig. d,e) than the all WTs and heterozygous CRIM1 groups (71.3 ± 29.6% and 68.1 ± 28.4%, respectively, Fig. a,b) by one-way ANOVA ( p = 0.0001; adj. p < 0.05, post hoc Tukey). Furthermore, the IL6 and CRIM1 group showed significantly lower 6-MP DIPs (44.6 ± 35.2%) than the heterozygous IL6 group (61.6 ± 25.1%; adj. p < 0.05, post hoc Tukey) (Fig. c,e). All 10 patients with both IL6 and CRIM1 variants, regardless of NUDT15 or TPMT status (red numbers in Fig. ), exhibited the lowest DIPs (9.77–32.68%) among all the PGx subgroups. Thus, a significant interplay between IL6 and CRIM1 in thiopurine intolerance was suggested. Notably, it is more clinically relevant to evaluate the magnitude of the actual decrease in the 6-MP DIP tolerated by patients than mere statistical significance, which is affected by the study sample size and biomarker prevalence. Table shows that more than one-quarter of the patients with homozygous CRIM1 (36.4%) and with IL6 and CRIM1 (50.0%) tolerated less than 25% of the planned DIP, increasing the risk of thiopurine therapeutic failure. The DIPs of our cohorts were comparable with the recommended 6-MP doses published in the current CPIC guideline when NUDT15 or TPMT variants were involved . Furthermore, when we raised the DIP cutoff from 25 to 35%, the proportions in the homozygous CRIM1 and the IL6 and CRIM1 groups increased to 54.6% (6/11) and 87.5% (7/8), respectively, which far exceeded the 38.9% (28/72) and 33.3% (3/9) of NUDT15 and TPMT non-WTs, respectively. 
Notably, only 6.1% (7/115) and 10.5% (12/115) of all WTs tolerated less than 25% and 35% of the planned DIP, respectively (Table ). Inter-ethnic variabilities in carrier frequencies and molecular phenotypes Both NUDT15 and TPMT show wide inter-ethnic variabilities. Table exhibits the inter-ethnic variabilities of the PGx variants and molecular phenotypes of the four thiopurine pharmacogenes computed from among the 2504 subjects of the 1000 Genomes Project . NUDT15 non-WT (i.e., IM or PM) is common in East (22.6%) and South (13.9%) Asians but rare in Europeans and Africans (< 1%). In contrast, TPMT non-WT is common in Europeans (8.0%) and Americans (13.3%) but relatively rare in Asians (< 5.0%). The novel PGx variant CRIM1 rs3821169 demonstrates a remarkably high minor allele frequency (T = 0.255) and carrier prevalence (43.7%, 220/504) in East Asians. Table also shows that 6.5% of East Asians harbor the homozygous CRIM1 rs3821169 variant, which is rarely detected in other populations (< 1.0%). In contrast, IL6 rs13306435 is widely distributed, with the highest carrier frequency of 15.0% in Americans and around 3.0% in East Asian and European populations; it is rare in South Asian and African populations (< 1.0%). The carrier frequencies of both IL6 and CRIM1 variants were 2.0% and 1.2% for the East Asian and American populations, respectively. Single- and multi-gene prediction performances of IL6 and CRIM1 We performed ROC analysis of GVB-based single- and multi-gene models to predict the last-cycle 6-MP DIPs using the 240 patients who were WTs for both NUDT15 and TPMT , to control for the long-known PGx effects of these two genes. Figure demonstrates that (b) GVB CRIM1 outperformed (a) GVB IL6 in predicting DIPs at all cutoff levels, probably due to the higher variant frequency of CRIM1 over IL6 in the study population. The two-gene model GVB IL6,CRIM1 (Fig. c) consistently outperformed each of the single-gene models (GVB IL6 and GVB CRIM1 ) at all cutoffs. For a comprehensive evaluation of all PGx interactions among NUDT15 , TPMT, IL6 , and CRIM1 , we performed a comprehensive ROC analysis using the data of all 320 pediatric patients with ALL (Fig. ). Among the four single-gene models (Fig. a,b,d), GVB NUDT15 outperformed the others at all cutoffs, probably due to the high prevalence of NUDT15 variants and the strong metabolic impact of NUDT15 on thiopurine toxicity. The two-gene models (Fig. c,f) consistently outperformed each of their corresponding single-gene counterparts, i.e., the order of their AUCs was GVB NUDT15,TPMT > GVB NUDT15 > GVB TPMT and GVB IL6,CRIM1 > GVB CRIM1 > GVB IL6 at all cutoff levels. Three-gene models created by adding IL6 or CRIM1 to the traditional NUDT15 and TPMT model also consistently improved the prediction accuracy (Fig. g,h). The final four-gene model (Fig. i) outperformed all other models in predicting DIPs at all cutoff levels. Moreover, it is worth noting that the ROC curves across the eight DIP cutoffs in Fig. exhibited 'dose–response relationships', i.e., the GVB score's prediction power (measured by AUC) increased as a function of the severity of thiopurine intolerance (measured by DIP). That is, the final four-gene model's AUC increased as a function of decreasing DIP (i.e., AUC <15% = 0.757, AUC <25% = 0.748, AUC <35% = 0.711, AUC <45% = 0.716, AUC <60% = 0.646, and AUC <80% = 0.592 in descending order, Fig. i). 
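The study computed AUCs with the R package 'pROC'; the following is an analogous minimal sketch in Python (scikit-learn) of how a continuous GVB-like score can be evaluated against several DIP cutoffs. The variable names and the randomly generated toy data are illustrative only and do not reproduce the study data.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy data: a GVB-like score in [0, 1] (lower = heavier variant burden)
# and a tolerated DIP (%) for each of 200 hypothetical patients.
gvb_score = rng.uniform(0.0, 1.0, size=200)
dip = np.clip(100 * gvb_score + rng.normal(0, 20, size=200), 0, 100)

for cutoff in (15, 25, 35, 45, 60, 80):
    y_true = (dip < cutoff).astype(int)    # event = intolerance below this DIP cutoff
    if y_true.sum() in (0, len(y_true)):   # AUC is undefined when only one class is present
        continue
    # A lower GVB score should indicate a higher risk, so the negated score is the predictor.
    auc = roc_auc_score(y_true, -gvb_score)
    print(f"DIP < {cutoff}%: AUC = {auc:.3f}")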
Evaluation of the clinical validity and utility of the star allele and GVB methods We systematically compared the clinical utility as well as the clinical validity of the traditional star allele-based and GVB-based methods for preventing thiopurine toxicity. Table demonstrates the measures of clinical validity and potential population impact along with the pharmacogenetic association of the different prediction models . Because designated star alleles for IL6 or CRIM1 are not available, star allele-based molecular phenotyping was not applicable to these novel genes. GVB NUDT15,TPMT slightly outperformed STAR NUDT15,TPMT , the classical star allele-based molecular phenotyping (Table a,b). Three-gene models (i.e., GVB NUDT15,TPMT,IL6 and GVB NUDT15,TPMT,CRIM1* ) also outperformed the two-gene models (Table c,d). The four-gene interplay model, GVB NUDT15,TPMT,IL6,CRIM1 ^( CRIM1,IL6 ) , presented the best performance for all eight measures of clinical validity and potential population impact (except specificity) (marked in bold in Table g). The addition of IL6 and CRIM1 to create the final four-gene model, integrating both common and rare alleles, markedly improved the PAF from 0.36 to 0.58, as well as the RR (3.29–5.73) and OR (4.21–8.06). PAF is the proportion of events attributed to the PGx risk factor, or the maximum percentage of cases that could be prevented if individuals who test positive for the PGx variants received different treatments. Among all the patients with 6-MP toxicities (DIP < 25%), the GVB-based model identified eight more patients than the traditional star allele-based method [23 patients in Table a vs. 31 patients in Table f]. NNG is the number of patients that must be genotyped to prevent one patient from experiencing an adverse event. The NNG of 20 and NNT of 5 of the traditional STAR NUDT15,TPMT showed that 5 out of every 20 patients genotyped would have positive test results and would need alternative treatment to prevent toxicity-related 6-MP intolerance in one patient (DIP < 25%). Adding IL6 and CRIM1 to the traditional NUDT15 and TPMT testing to create GVB NUDT15,TPMT,IL6,CRIM1 ^( CRIM1,IL6 ) may require only 12.5 patients (37.5% improvement of NNG) to be genotyped to return 3.7 test-positive patients (26.0% improvement of NNT) receiving alternative treatment to prevent an adverse event in one patient (Table g). Ethics approval and consent to participate Informed written consent was obtained from all subjects, and the study was approved by the ethics committees of Asan Medical Center, Seoul National University Hospital, and Samsung Medical Center. 
Of the 125 patients with WT characterization for both NUDT15 and TPMT, 94, 12, 11, and 8 patients belonged to the heterozygous CRIM1, heterozygous IL6, homozygous CRIM1, and IL6 and CRIM1 variant groups, respectively (Table ). We used the patients with all WTs (N = 115) as a control group for the following analysis. The average tolerated 6-MP DIPs of non-WTs for NUDT15 (47.1 ± 30.5%, N = 72) and for TPMT (56.6 ± 33.6%, N = 9) were significantly lower than that of all WTs (71.3 ± 29.6%, N = 115) (p < 0.001, Table ). The patients with homozygous CRIM1 (dark blue circle in Fig. ) tolerated significantly lower 6-MP DIPs than the patients with all WTs, both before (N = 16, 44.6 ± 35.2%) and after (N = 11, 42.3 ± 35.0%) excluding the five subjects who also carried NUDT15 (59.76 ± 37.24%) or IL6 (9.77%) variants. To rule out the PGx effects of NUDT15, TPMT, and homozygous CRIM1 on thiopurine intolerance, we extracted 228 samples from non-carriers of these variants for the further discovery of novel PGx variants. We observed that carriers of IL6 rs13306435 (N = 19, 48.0 ± 27.3%) exhibited significantly lower 6-MP DIPs than non-carriers (N = 209, 69.9 ± 29.0%), as evaluated by Student's t test (p = 0.0016) and multiple covariate linear regression (p = 0.0028). Furthermore, of the 19 carriers, the 7 patients with both IL6 rs13306435 and CRIM1 variants demonstrated significantly lower 6-MP DIPs (24.7 ± 8.9%) than the 12 patients harboring only the IL6 rs13306435 variant (61.6 ± 25.1%; orange circle in Fig. ). These results suggested a potential interplay between the IL6 and CRIM1 variants, which was further supported by the finding that the seven patients with both IL6 and CRIM1 variants showed significantly lower 6-MP DIPs (24.7 ± 8.9%) than the 94 heterozygous CRIM1 carriers (68.1 ± 28.4%; light blue circle in Fig. ). IL6 and CRIM1 variants in thiopurine toxicity Figure exhibits the distributions of the last-cycle 6-MP DIPs of the 115 all WTs (Fig. a), carriers of only heterozygous CRIM1 (N = 94, Fig. b), carriers of only heterozygous IL6 (N = 12, Fig. c), carriers of only homozygous CRIM1 (N = 11, Fig. d), and carriers of both IL6 and CRIM1 variants (N = 8, Fig. e). The homozygous CRIM1 and IL6 and CRIM1 groups showed significantly lower 6-MP DIPs (44.6 ± 35.2% and 24.7 ± 8.9%, respectively, Fig. d,e) than the all WTs and heterozygous CRIM1 groups (71.3 ± 29.6% and 68.1 ± 28.4%, respectively, Fig. a,b) by one-way ANOVA (p = 0.0001; adj. p < 0.05, post hoc Tukey). Furthermore, the IL6 and CRIM1 group showed significantly lower 6-MP DIPs (44.6 ± 35.2%) than the heterozygous IL6 group (61.6 ± 25.1%; adj. p < 0.05, post hoc Tukey) (Fig. c,e). All 10 patients with both IL6 and CRIM1 variants, regardless of NUDT15 or TPMT status (red numbers in Fig. ), exhibited the lowest DIPs (9.77–32.68%) among all PGx subgroups. Thus, a significant interplay between IL6 and CRIM1 in thiopurine intolerance was suggested. Notably, it was more clinically relevant to evaluate the magnitude of the actual decrease in 6-MP DIP tolerated by patients than the mere statistical significance, which is affected by the study sample size and biomarker prevalence. Table shows that more than one-quarter of the patients with homozygous CRIM1 (36.4%) and IL6 and CRIM1 (50.0%) variants tolerated less than 25% of the planned DIP, increasing the risk of thiopurine therapeutic failure.
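As a worked illustration of the group comparisons reported above (Student's t test, and one-way ANOVA followed by Tukey's post hoc test on tolerated DIPs), the Python sketch below uses simulated values with group means loosely based on the reported figures; it is not the study data, and the multiple covariate regression step is omitted.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)

# Illustrative DIP values (%) for three hypothetical carrier groups.
groups = {
    "all_WT": rng.normal(71, 30, 115).clip(0, 100),
    "het_CRIM1": rng.normal(68, 28, 94).clip(0, 100),
    "IL6_and_CRIM1": rng.normal(25, 9, 8).clip(0, 100),
}

# Two-group comparison, as with carriers vs. non-carriers of a variant.
t, p = stats.ttest_ind(groups["all_WT"], groups["IL6_and_CRIM1"])
print(f"t test: p = {p:.4f}")

# One-way ANOVA across all groups, followed by Tukey's post hoc test.
f, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: p = {p_anova:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels))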
The DIPs of our cohorts were comparable with those of the recommended 6-MP doses published in the current CPIC guideline when NUDT15 or TPMT variants were involved. Furthermore, when we raised the DIP cutoff from 25 to 35%, the proportions of the homozygous CRIM1 and IL6 and CRIM1 groups increased to 54.6% (6/11) and 87.5% (7/8), respectively, which far exceeded the 38.9% (28/72) and 33.3% (3/9) of NUDT15 and TPMT non-WTs, respectively. Notably, only 6.1% (7/115) and 10.5% (12/115) of all WTs tolerated less than 25% and 35% of the planned DIP, respectively (Table ).
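As a worked example of the clinical-utility measures compared earlier (PAF, RR, OR, NNG, NNT), the sketch below derives them from a 2x2 table of PGx test result versus thiopurine intolerance. The counts are invented for illustration, and the NNG/NNT formulas assume that alternative treatment fully prevents the event in test-positive patients.

# Hypothetical 2x2 counts: rows = PGx test result, columns = intolerant (DIP < 25%) or not.
tp, fp = 25, 55    # test-positive patients with / without intolerance (illustrative)
fn, tn = 15, 225   # test-negative patients with / without intolerance (illustrative)

n = tp + fp + fn + tn
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

risk_pos = tp / (tp + fp)            # risk of intolerance if test positive
risk_neg = fn / (fn + tn)            # risk of intolerance if test negative
rr = risk_pos / risk_neg             # relative risk
odds_ratio = (tp * tn) / (fp * fn)

prev_exposed = (tp + fp) / n
paf = prev_exposed * (rr - 1) / (1 + prev_exposed * (rr - 1))  # population attributable fraction (Levin's formula)

event_rate = (tp + fn) / n
nng = 1 / (event_rate * sensitivity)  # patients genotyped per event prevented (assumes prevention in all test-positives)
nnt = nng * (tp + fp) / n             # of those, how many test positive and receive alternative treatment

print(f"RR={rr:.2f} OR={odds_ratio:.2f} PAF={paf:.2f} NNG={nng:.1f} NNT={nnt:.1f}")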
In the present study, the interplay between IL6 and CRIM1 variants in thiopurine intolerance due to hematological toxicity was investigated in 320 pediatric patients with ALL. IL6 has been known to modulate hematopoiesis and neutrophil trafficking, especially possessing a role in anti-apoptosis. In patients with osteomyelitis, IL6 was correlated with longer neutrophil survival apart from other cytokines; this anti-apoptotic effect was blocked using anti-IL6 antibodies and reversed with anti-IL6.
According to a recent study on chronic hepatitis C infection, the expression of anti-apoptotic genes was increased following in vitro IL6 treatment, with a considerable downregulation of T cell inhibitory receptors and caspase-3, indicating the promise of IL6 in enhancing lymphocyte effector functions. These anti-apoptotic roles appear to be secondary to the effects of IL6 trans-signaling, such as the inhibition of the chemokines CXCL1, CXCL8, and CX3CL1 and the promotion of the chemokines CXCL5, CXCL6, and CCL2; accordingly, the suppression of IL6 might result in blood cytopenia. The occurrence of neutropenia as an adverse effect of the IL6 inhibitor tocilizumab could also demonstrate the relationship between IL6 and neutrophil survival. IL6 mobilizes neutrophils into the circulating pool from the marginated pool comprising the lymph nodes and the spleen. Therefore, neutropenia due to a lack of IL6 activity induced by tocilizumab may indicate that IL6 has a critical role in enriching circulating neutrophils. The IL6 variant rs13306435 is located in exon 5 of the IL6 gene. The T>A variation of rs13306435 results in an amino acid change from Asp to Glu. The T allele of rs13306435 is reportedly associated with increased expression and plasma levels of IL6. In this regard, patients with a heterozygous rs13306435 variant might have decreased expression and plasma levels of IL6 compared with patients with the WT characterization, resulting in reduced IL6 effects on neutrophils. CRIM1 is a cell-surface transmembrane protein that resembles developmentally important proteins known to interact with bone morphogenetic proteins (BMPs). A role of CRIM1 in drug resistance has been suggested by previous studies revealing that the mRNA expression of CRIM1 is high in resistant leukemic cells. This affects the levels of BMPs, suggesting that CRIM1 regulates the growth and differentiation of hematopoietic cells. The rs3821169 heterozygous cases revealed lower mRNA expression levels than the WT cases, indicating that subjects carrying this variant might display drug-sensitive responsiveness. Although we could not clarify the detailed mechanism underlying the interplay between the IL6 and CRIM1 variants, a negative feedback loop between IL6 and the BMP pathway has been reported, in which increased levels of IL6 induce BMP pathway activity, resulting in the suppression of IL6. Based on our findings, it can be suggested that the interplay between IL6 and CRIM1 in thiopurine intolerance due to hematological toxicity may represent a pharmacodynamic effect leading to an adverse reaction, whereas the well-known NUDT15 and TPMT are pharmacokinetic enzymes that metabolize thiopurines. The present study has several limitations that need to be acknowledged, including possible confounding effects from concomitant medications (methotrexate or vincristine) and the absence of serum level measurements of drugs or metabolites. Moreover, not all cases of thiopurine toxicity were explained by the pharmacogenetic analysis. Seven (6.1%) of the 115 all-WT patients experienced thiopurine toxicity. Supplementary Table lists further candidate variants determined by analyzing the all-WT group (N = 115, p < 0.05 by one-sided Student's t test). Of the three carriers of FSIP2 rs191083003, two (66.7%) exhibited DIP < 25% (8.82, 21.88, and 48.54%, N = 3).
We observed one more FSIP2 rs191083003 carrier in the homozygous-CRIM1 group, who exhibited the lowest DIP of 6.94% within the entire ALL cohort (N = 320). The low frequency (1.25%, 4/320) of FSIP2 rs191083003 prohibited any conclusion, necessitating further elucidation. Overall, combining the interplay between IL6 and CRIM1 with the well-known NUDT15 and TPMT improved the PAF from 36.4 to 58.2%, considering PGx variants only, in an East Asian cohort of pediatric ALL (N = 320). The quantitative analytical approach employed in the present study could be applied to other ethnic groups to further discover and evaluate thiopurine-related pharmacogenomic variants. Reportedly, Americans present the highest allele frequency of IL6 rs13306435 (A = 0.078) among all ethnic groups (Global A = 0.020, the 1000 Genomes Project, Phase 3). This high inter-ethnic variability may partially explain why rs13306435 has not yet been identified as a biomarker for thiopurine intolerance. Current research is mostly biased towards Europeans. The NUDT15 rs116855232 variant, which was recently discovered in the Korean population as a strong predictor of thiopurine toxicity, shows the highest allele frequency in East Asians (T = 0.095) among all ethnic groups (Global T = 0.040). Pharmacogenes, by definition and unlike pathogenic disease genes, do not have an overt phenotype unless exposed to drugs. The absence of a detrimental phenotypic effect attributable to pharmacogenes may have permitted wide inter-ethnic variability and/or diversity across different ethnic groups under various evolutionary selection pressures. For pediatric ALL, the CPIC guideline for thiopurine treatment is based on star allele-based haplotypes, with designated molecular phenotypes of NUDT15 and TPMT. However, CPIC does not provide general standard rules for combining multi-gene interactions of the categorically classified star alleles. Novel genes like IL6 and CRIM1 have neither designated star alleles nor molecular phenotypes. The quantitative GVB method has benefits over categorical star allele-based approaches. GVB quantitates the single- or multi-gene PGx burden of common, rare, and novel variants into a single score, providing a comprehensive framework for further PGx discovery and for the evaluation of multi-gene interactions. A conventional single variant-based association test of rare variants requires infeasibly large sample sizes; however, approaches that jointly aggregate common, rare, and novel variants substantially reduce the required effective sample sizes. In contrast to a traditional haplotyping-based method, GVB assigns a gene-level score to each pharmacogene without using population data, enabling an unbiased PGx approach, especially for under-investigated subpopulations. In summary, our results suggest an independent and/or additive effect of the interplay between IL6 rs13306435 and CRIM1 rs3821169 on thiopurine intolerance attributed to hematological toxicity in pediatric ALL. Supplementary Information 1.
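The paper's GVB score is defined in its methods, which are not included in this excerpt. As a loose illustration only, the Python sketch below aggregates assumed per-variant deleteriousness scores into a single gene-level burden and combines genes into a multi-gene score; the function names, scoring scale, and aggregation rules are assumptions for illustration, not the authors' published definition.

import math

def gene_gvb(variant_scores):
    """Aggregate per-variant scores (assumed in (0, 1], 1 = benign) into one
    gene-level burden via a geometric mean; this aggregation rule is an
    assumption for illustration, not the published GVB definition."""
    if not variant_scores:
        return 1.0                 # no qualifying variants -> no burden
    log_sum = sum(math.log(s) for s in variant_scores)
    return math.exp(log_sum / len(variant_scores))

def multi_gene_gvb(per_gene_scores):
    """Combine gene-level scores for a multi-gene model (e.g., NUDT15, TPMT,
    IL6, CRIM1) by taking the minimum, i.e., the most burdened gene dominates
    (again an assumption)."""
    return min(per_gene_scores.values())

# Hypothetical patient: per-gene lists of scores for observed variants.
patient = {
    "NUDT15": [0.4],               # e.g., one damaging missense variant
    "TPMT": [],                    # wild type
    "IL6": [0.7],
    "CRIM1": [0.7, 0.7],           # homozygous variant counted twice
}

gene_scores = {g: gene_gvb(v) for g, v in patient.items()}
print(gene_scores, multi_gene_gvb(gene_scores))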
Informe 2023. Necesidades prioritarias para la medicina de familia, para la atención primaria en España
3e3b9551-d01b-44a6-826d-50fa93d1d2f9
10485783
Family Medicine[mh]
null
5113a8bd-324a-423f-9397-723367bd98a7
8380063
Pharmacology[mh]
INTRODUCTION Pharmacology is the study of how medicines and other drugs work and are processed by the body. As such, pharmacology undergraduates require a comprehensive understanding of both in vitro and in vivo drug responses. Despite advances in molecular biologic techniques, studies in isolated cells and tissues do not fully model the complex interactions observed in whole organisms. Currently, in vivo validation remains critical in the drug discovery and development process. To limit the use of animal models at this preclinical stage, the principles of replacement, reduction, and refinement (the 3Rs) aim to address the potential harms to animals while supporting high-quality science and translation by weighing the benefits of this research. Animal use in pharmacology education has steadily declined in the last 30 years; such use in education and training accounted for <1% of in vivo experimental procedures in 2019, and <2% of students are taught in vivo skills during their degree. Learned societies, such as the British Pharmacological Society, have highlighted the importance of these skills through their inclusion in recommended curricula within pharmacology education. Despite significant financial contributions from these societies and the pharmaceutical industry, in vivo skills remain an area of concern. We present a novel invertebrate model, Lumbriculus variegatus, for use in whole-organism studies in a teaching environment. With the exception of living cephalopods, invertebrates are not covered under the Animals (Scientific Procedures) Act 1986, and this organism therefore offers an opportunity for use within pharmacology education. L. variegatus is an aquatic worm inhabiting shallow freshwater ponds, lakes, and marshes, and these animals have been extensively characterized as indicator organisms for toxic compounds in aquatic systems. Touching the anterior of L. variegatus results in retraction and the reversal of body position, whereas touching the tail elicits helical swimming. These behaviors have previously been described and used to determine the effects of exogenous compounds on L. variegatus. Despite the existing ecotoxicological literature, much less is known about the reactions of L. variegatus to drug compounds. L. variegatus enables the inclusion of practical in vivo pharmacology experiments to improve student learning and confidence within the laboratory as well as training in in vivo pharmacology. This organism is low-cost and exempt from much of the regulation and many of the ethical challenges associated with conventional in vivo models, which often prevent in vivo practical classes. Using novel assays that have been developed to transfer easily into the education setting, we present the effects of three distinct ion channel blockers on the behavior of L. variegatus: specifically, the ability to perform the stereotypical behaviors of body reversal and helical swimming following tactile stimulation, and unstimulated free locomotion, in the presence of dantrolene, a ryanodine receptor antagonist, lidocaine, a voltage-gated sodium channel blocker, and quinine, a nonselective sodium and potassium channel blocker. Our aim was to develop a novel whole animal model for use within a teaching environment for the demonstration of fundamental pharmacologic principles and techniques. This was tested in a first-year medical pharmacology laboratory practical, and anecdotal feedback was collected alongside experimental data.
We found that L. variegatus is a technically straightforward yet effective animal model for the teaching of in vivo pharmacology. Additionally, this organism has broader potential in pharmacologic education, analogous to Caenorhabditis elegans and Drosophila melanogaster. METHODS 2.1 Lumbriculus variegatus culture L. variegatus were purchased from Alfa Fish Foods and laboratory-reared in aquariums containing artificial pondwater, composed as previously described by O'Gara et al., using UV-treated deionized water produced by an Elix® Essential 3 UV Water Purification System. The artificial pondwater was continuously aerated and filtered using commercial air stones and aquarium filters, respectively. The pH was not monitored or adjusted once the worms were placed in the water. The aquariums were kept at room temperature (18–21°C) and subject to a 16:8-h light-dark cycle. Cultures were fed TetraMin flakes and 10 mg/L spirulina weekly. L. variegatus populations were maintained for a minimum of 3 months before experimentation to limit variation in the colony. Individual worms used in experiments were randomly selected, lacked any obvious morphological defects, and ranged from 2 to 8 cm in length as per previous studies, as we observed no size-dependent changes within this range. 2.2 Reagents and solutions Dantrolene, lidocaine, and quinine were obtained from Sigma-Aldrich (Dorset, United Kingdom). Dantrolene and quinine were dissolved in 100% dimethyl sulfoxide (DMSO) (Sigma-Aldrich) to stock concentrations of 10 mM and 200 mM, respectively. Dantrolene and quinine were diluted in artificial pondwater to give a final DMSO concentration of 0.5% and maximum final concentrations of 50 μM and 1 mM, respectively. Artificial pondwater with 0.5% DMSO was used as a vehicle control for dantrolene and quinine experiments. Lidocaine was dissolved in artificial pondwater to give a maximum final concentration of 1 mM, and artificial pondwater was used as the vehicle control. 2.3 Stereotypical movement assay Eighteen to 24 h before experimentation, one L. variegatus worm was placed in each well of a Cellstar® 6-well plate (Greiner Bio-One) containing 4 ml of artificial pondwater. Plates were kept at room temperature and subject to a 16:8-h light-dark cycle. After this acclimation period, the pondwater was replaced and the baseline ability of the worm to perform stereotypical behaviors was tested and recorded (Baseline). This was done by alternately stimulating the anterior or posterior ends of the body with a clean 20–200 μl plastic pipette tip, 5 times per end, with a 5–10-s interval between stimuli. Movement was scored as 1 = no movement, 2 = incomplete stereotypical movement, 3 = full stereotypical movement. The artificial pondwater was then removed and immediately replaced with a drug solution or a vehicle (artificial pondwater only or 0.5% DMSO in artificial pondwater). After a 10-min incubation with the drug solution or vehicle, the worms were tested again using the same procedure (drug exposure). For the "rescue" experiments, the drug solution or vehicle was aspirated from the well and, to remove any latent drug or vehicle residue, fresh pondwater was added, immediately aspirated, and then replaced with fresh, untreated pondwater. These worms were then retested at 10 min (Rescue 10 mins) and 24 h (Rescue 24 h) postdrug or vehicle treatment. Data are expressed as a ratio of the movement score while in treatment relative to baseline.
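A minimal sketch of the scoring arithmetic just described, assuming that the ratio is taken over the summed scores of the ten stimulations per time point (the exact aggregation is not specified above, so this reading is an assumption); the scores themselves are invented.

# Illustrative scoring: 5 anterior and 5 posterior stimulations per time point,
# each scored 1 (no movement), 2 (incomplete), or 3 (full stereotypical movement).
baseline_scores = [3, 3, 3, 3, 3, 3, 3, 3, 3, 2]
drug_scores = [2, 2, 1, 2, 2, 1, 2, 2, 2, 1]

# Ratio of the summed movement score during drug exposure to the summed
# baseline score for the same worm (an assumed reading of "ratio of the
# movement score relative to baseline").
ratio = sum(drug_scores) / sum(baseline_scores)
print(f"movement score ratio vs. baseline = {ratio:.2f}")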
The data collection methods for these assays are shown in Figure . 2.4 Free locomotion assays As in the stereotypical movement assay, 18–24 h before testing, worms were placed individually in Cellstar® 6-well plates with fresh pondwater and kept at room temperature and subject to a 16:8-h light-dark cycle. Following this acclimation period, pondwater was replaced with 2 ml fresh artificial pondwater to limit movement in the z-axis, and baseline free locomotion was recorded by rapid, sequential image collection with a 13-megapixel camera at a rate of one image per second for 50 s. Images were then collected 10 min after removing and immediately replacing the artificial pondwater with a drug solution or vehicle. Drug solutions and vehicle controls were then removed, the wells washed, and fresh pondwater added. Rescue experiments were collected at 10 min (Rescue 10 mins) and 24 h (Rescue 24 h) after drug or vehicle removal. Collected images were analyzed using ImageJ software. These images were compiled into a z-stack image, this being a compilation of photographs taken at 1-s intervals over 50 s. An area of known distance within each z-stack image was measured and ImageJ calibrated to pixels per centimeter (pixels/cm) within each image. To determine the area traversed by each worm, the foreground and background were separated using the thresholding functionality of ImageJ to separate the pixels activated by L. variegatus from those activated by the 6-well plate. The total area covered by the L. variegatus at baseline, drug exposure, Rescue 10 min, and Rescue 24 h was then determined based on the calibration of pixels/cm within ImageJ. Data are expressed as a percentage of the area covered by L. variegatus in baseline conditions. The data collection method for this assay is shown in Figure . For both assays, and as per previous studies that have used L. variegatus, decomposition, as determined by visible tissue degeneration and whole-organism tissue pallor at assay endpoints, was the main indicator of lethal toxicity. L. variegatus were only exposed to one test compound and euthanized at assay endpoints by rapid submersion in 70% ethanol. 2.5 Statistical analysis The sample size for each assay and treatment was eight worms. Data are displayed as the mean ± standard error of the mean (SEM) for each data set. Data are relative to the untreated, baseline control condition. Values for each behavioral measurement were compared with the untreated control conditions (baseline) for each L. variegatus per condition. Statistical analysis was performed in GraphPad Prism 9. Drug exposure conditions were compared with baseline conditions by paired nonparametric two-tailed t test for stereotypical movement assays and paired parametric two-tailed t test for free locomotion assays. A two-way ANOVA with Dunnett's posttest was used to analyze 10-min and 24-h rescue time points compared with baseline conditions for both assays. p < .05 was the threshold for statistical significance.
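The free locomotion analysis above (z-stack projection, thresholding, pixels-per-cm calibration) was performed in ImageJ. As a rough illustration only, the Python sketch below reproduces the same idea on a simulated image stack; the array shapes, threshold value, and calibration are made up, and the threshold direction assumes the worm is darker than the background.

import numpy as np

def area_covered(image_stack, threshold, pixels_per_cm):
    """Project a stack of grayscale frames, separate worm pixels from background
    by thresholding, and convert the covered pixel count to cm^2 using the
    plate calibration."""
    projection = np.min(image_stack, axis=0)          # darkest value per pixel across frames
    worm_pixels = projection < threshold              # boolean mask of pixels the worm visited
    return worm_pixels.sum() / (pixels_per_cm ** 2)   # area in cm^2

# Hypothetical 50-frame stack of 8-bit images (one frame per second).
rng = np.random.default_rng(5)
frames = rng.integers(180, 256, size=(50, 480, 640)).astype(np.uint8)  # bright background
frames[:, 200:210, 100:400] = 40                                       # dark "worm" track

print(f"area covered: {area_covered(frames, threshold=100, pixels_per_cm=50):.2f} cm^2")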
RESULTS 3.1 Behavioral response to dantrolene The ryanodine receptor antagonist, dantrolene, had no significant effects on stereotypical movements at ≤25 µM. However, we found that 50 µM minimally but significantly inhibited body reversal after 10 min exposure (p = .0313, Figure ). This was not the case for helical swimming (p > .05, Figure ). Ten min after removal of 50 µM dantrolene and incubation in artificial pondwater, body reversal was significantly reduced compared with baseline (p = .0121, Figure ). We also observed that helical swimming movements were reduced 10 min after the removal of 25 µM (p = .0088, Figure ) and 50 µM dantrolene (p = .0081, Figure ). These effects persisted for 24 h after exposure to both 25 µM (p = .0290, Figure ) and 50 µM dantrolene (p = .0015, Figure ) but were only observed for helical swimming and not body reversal. Despite this prolonged effect on movement, at 24 h, the worms were still alive with no signs of tissue decomposition. The helical swimming and body reversal assays rely on tactile stimulation by an observer. In addition to stimulated behaviors, we also wanted to determine if unstimulated free locomotion was affected by our test compounds.
Figure shows that for dantrolene, we observed no significant differences between baseline free locomotion and drug treatment (Figure ) except at 5 µM. This treatment significantly increased free locomotion by 18.30 ± 10.85% compared with baseline (p = .0341, Figure ). 3.2 Behavioral response to lidocaine As shown in Figure , we found that the sodium channel blocker lidocaine significantly inhibited both body reversal and helical swimming at 0.5 mM and 1 mM. At concentrations ≤0.5 mM, these effects were reversed following 10 min in drug-free artificial pondwater, with no significant difference compared with baseline (p > .05, Figure ). However, the effect of 1 mM lidocaine persisted 10 min after removal, significantly inhibiting both body reversal (p = .0115, Figure ) and helical swimming (p = .0035, Figure ). Twenty-four hours after lidocaine exposure, both movements returned to baseline levels (p > .05, Figure ). We also observed these dose-dependent effects in the free locomotion assays (Figure ), where movement was significantly reduced at 0.5 mM (76.24 ± 5.23%) and 1 mM (85.92 ± 5.23%) lidocaine compared with baseline levels (p < .0001, Figure ). Similar to the stereotypical movement assay, movement returned to baseline levels at both 10 min and 24 h after drug exposure (p > .05, Figure ). 3.3 Behavioral response to quinine Following on from lidocaine, we sought to examine the effect of a different ion channel blocker on L. variegatus. Figure shows that the nonspecific sodium and potassium channel blocker quinine inhibited both body reversal and helical swimming at concentrations equimolar to lidocaine (0.5 mM and 1 mM). However, unlike lidocaine, these effects persisted after 10 min and 24 h in drug-free artificial pondwater (p < .0001, Figure ). We observed similar results in the free locomotion assay (Figure ). Movement increased by 45.5 ± 7.70% after exposure to 0.01 mM quinine (p = .0006, Figure ); however, this effect was reversed, and movement returned to baseline conditions after 10 min and 24 h in drug-free artificial pondwater (p > .05, Figure ). Conversely, free locomotion was inhibited by 52.59 ± 4.04% at 0.5 mM (p < .0001, Figure ) and 90.15 ± 1.67% at 1 mM (p < .0001, Figure ). This inhibitory effect persisted after 10 min in drug-free artificial pondwater, with 0.5 mM inhibiting movement by 32.68 ± 5.51% (p = .0112, Figure ) and 1 mM by 83.04 ± 2.98% (p < .0001, Figure ), as well as after 24 h (p < .0001, Figure ). At this point, as with dantrolene (Figure ), despite the prolonged effect on movement, the worms were alive with no signs of tissue decomposition.
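A minimal sketch of the paired baseline-versus-drug comparisons used in these results (paired parametric test for free locomotion, paired nonparametric test for movement scores), on invented values for eight worms; the two-way ANOVA with Dunnett's posttest applied to the rescue time points is not reproduced here.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical free-locomotion areas (% of baseline) for 8 worms, baseline vs. drug.
baseline = np.full(8, 100.0)
drug = rng.normal(50, 10, 8)        # e.g., a concentration that roughly halves movement

# Paired parametric test (free locomotion) and paired nonparametric test
# (stereotypical movement scores), mirroring the analyses described above.
t, p_t = stats.ttest_rel(baseline, drug)
w, p_w = stats.wilcoxon(baseline, drug)
print(f"paired t test p = {p_t:.4f}; Wilcoxon signed-rank p = {p_w:.4f}")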
DISCUSSION L. variegatus has been extensively characterized as an indicator organism for toxic compounds in aquatic systems and proposed as a standard organism for sediment bioaccumulation tests. Herein, we demonstrated that L. variegatus has application as an effective model for the teaching of the effects of pharmacological agents on an intact system.
Using two novel assays, we describe three different behavioral endpoints, body reversal, helical swimming, and free locomotion, that can be measured without the need for costly or specialized equipment, regulatory approval, or highly specialized animal housing facilities, requirements that often prevent the teaching of in vivo practical skills. We recognize that experiments conducted in invertebrates do not replicate the complexity of higher animals and do not wholly replace studies in vertebrate species, such as mice and rats. However, grounding students in experience with L. variegatus will provide many of them with whole-animal experience they otherwise would not have, while providing an excellent foundation for those who will go on to research higher organisms. Experience with this organism not only exposes students to concepts around replacement, refinement, and reduction in animal experimentation but also gives them direct experimental experience of putting these elements into practice. There are other invertebrate models available for use in pharmacology and biomedical sciences teaching: C. elegans, D. melanogaster, and others. However, in the classroom laboratory, L. variegatus presents some advantages over these organisms. First, the larger size of L. variegatus (50–80 mm) compared with C. elegans (~1 mm) makes it easier to view as an individual. Second, for the assays presented, the drug exposure time and rescue time points have been demonstrated to be sufficient for the compounds tested to elicit effects and enable educators to complete these experiments within a standard laboratory practical teaching timeframe. Moreover, when implemented within first-year undergraduate toxicology teaching, students have reported that these assays "[were] really helpful in helping me understand our course content," that they "made you think like a scientist," and that they were "stimulating and enjoyable." In the stereotypical movement assay (Figure ), students can measure the effects of drugs on reducing two different behaviors (body reversal and helical swimming) without the need for a microscope or specialist equipment, unlike other models used in teaching such as C. elegans. This assay allows students to distinguish L. variegatus that do not perform body reversal or helical swimming movements from worms that do, giving them hands-on, semi-quantitative in vivo pharmacology training. The relative ease of applying the tactile stimulation and the simplicity of the scoring system minimize the risk of misinterpretation of the movements and limit any variation between students conducting the assay. The free locomotion assay is a more sophisticated and quantitative experiment, which allows students to engage in movement recording and quantitative analysis. Both assays offer educators the opportunity to engage students with in vivo measurement and scoring, data recording and interpretation, and statistical analysis. There is also excellent potential for introducing other key curriculum concepts such as experimental blinding, molarity calculations, drug solubility, and toxicology. Further assay development may yield practical teaching protocols for behavioral assays such as tolerance and place preference, in vitro assays such as receptor binding and immunohistochemistry, and drug dose–response relationships for other physiological measurements, for example, pulse rate. No in vivo model is without limitations.
One constraint in studying aquatic organisms is test compound solubility. Our ability to investigate dantrolene was limited due to its known solubility and precipitation issues. As such, 50 µM was the maximal concentration achieved using DMSO (0.5%) in artificial pondwater. Similarly, lidocaine and quinine were used at maximal concentrations dictated by their solubility. Additionally, it should be noted that DMSO (0.5%) as a vehicle produced no significant changes in worm behavior compared with baseline (Figures and ). Further investigation into drug compound solubility and the effects of different vehicles on these worms is needed to fully identify the limitations of this specific model. The differential acute and long-term effects of dantrolene, lidocaine, and quinine on L. variegatus behavior suggest that these drugs are working through distinct mechanisms. It is not known if this species expresses the sites of action for their respective mechanisms of action. Currently, this is limited by the lack of genomic information on L. variegatus; its full genome has not been sequenced, and protein expression studies are limited. This lack of knowledge presents both limitations and opportunities for further pharmacological investigation of these animals. For example, in our study, dantrolene did not demonstrate a straightforward dose-dependent effect on L. variegatus stereotypical movements (Figure ) or free locomotion (Figure ). The 5 µM concentration produced locomotor activation, while higher concentrations did not. This variable response may be due to a lack, or an alteration, of the dantrolene binding site within ryanodine receptors or their homologs. Differential absorption, distribution, metabolism, and excretion of pharmacological compounds may also account for the varying results observed here. Studies have demonstrated that L. variegatus expresses the ATP-binding cassette transporter protein p-glycoprotein, but further study is required to dissect the applicability of pharmacokinetics within L. variegatus to pharmacological studies. Lidocaine and quinine, however, demonstrated clear dose-dependent effects on both stereotypical movements (Figures and ) and free locomotion (Figures and ) at 0.5–1 mM. Interestingly, quinine demonstrated a 45.5 ± 7.70% increase in free locomotion at 0.01 mM (p = .0006, Figure ), and then the effects became inhibitory. This may be due to off-target toxicity at concentrations >0.1 mM. Throughout our assays, we observed no evidence of decomposition of L. variegatus 24 h after drug exposure, indicating that the doses used were sublethal. While we have shown that these compounds induce differential pharmacodynamic effects, with the effects of lidocaine being readily reversible and those of quinine being long-term but sublethal, further study is required to elucidate the full pharmacokinetic and pharmacodynamic profile of this species. L. variegatus, and the novel assays we present, will emphasize to students the use and importance of animals in pharmacological research and drug development while giving them hands-on experience in a living system. The purpose of our study was not to provide a full evaluation of the resource by students, but anecdotal comments and feedback indicate that they enjoy using the worms and recognize the practical skills they have gained in doing so. It is important to ensure that pharmacology students continue to receive training in in vivo pharmacology at a time when few students currently do.
Educators must seek to address the skills gap and prepare successful graduates, both for our students' benefit and for the continued advancement of the pharmacology discipline. The authors declare that they have no conflict of interest.
Comparison of a Novel Handheld Telehealth Device with Stand-Alone Examination Tools in a Clinic Setting
56622b1b-38f7-442e-935f-efaf9d6e2665
6918850
Pediatrics[mh]
Both the American Academy of Pediatrics and the Academy of Family Medicine have recommended guidelines for scheduled pediatric clinic visits. These visits are typically performed in an office or clinic setting, incorporating a history and physical examination, developmental assessment, aspects of preventative medicine, and immunizations. Assessment of the ill child may occur in the office, urgent care, or emergency department settings. There is increasing interest in the provision of these visits in remote care settings that may also include the home, school, day care, or other health care facilities. To ensure quality encounters, clinicians and patients must utilize reliable, validated diagnostic equipment and streamlined methods of data collection and transfer. In recent years, digital remote examination tools, such as the digital stethoscope and otoscope, have been incorporated into clinical practice, particularly in telemedicine solutions. Telemedicine is the provision of health care utilizing telecommunication technologies augmented by the use of peripheral examination tools. Pediatric telehealth services have been incorporated into health care systems, hospitals, emergency rooms, outpatient clinics, schools, and day-care settings, with evidence showing clinical effectiveness for the diagnosis and treatment of acute illness. These models often strive to replicate in-person services and, as such, have been published in the peer-reviewed literature. There is evidence that remote consultation can modify health practices and treatment compliance in peripheral environments for specific diseases. The introduction of a telemedicine model in suburban childcare centers using validated diagnostic tools has resulted in significantly reduced pediatric office and emergency department visits, along with the additional benefit of a reduction in parental absences from work. The utilization of digital diagnostic technologies in clinical practice has particular potential in underserved and remote areas, potentially reducing interhospital transfers and waiting times to access specialty services and improving clinical outcomes for children. The Tyto device (TytoCare Ltd., Israel) is a novel examination system that includes a built-in examination camera, an infrared thermometer, a wireless communication unit, a lithium ion battery, and a touch screen. The system also incorporates a digital stethoscope, a digital otoscope, and a tongue depressor. The Tyto platform supports live video or store-and-forward applications, and users can be directed by voice or on-screen instructions to obtain images and sounds that comport with the standard of care, enabling a remote physical examination. Neither the camera nor the thermometer was used in this study. The purpose of the study was to assess the clinical validity and reliability of a next-generation, novel, Food and Drug Administration (FDA)-cleared, multifunction comprehensive telehealth examination tool and compare it with the stand-alone digital peripheral examination devices currently deployed in our telehealth program. The primary aim of this study was to determine whether the novel device showed diagnostic equivalency with the commercially available stand-alone FDA-cleared telehealth examination devices routinely used in our telehealth program. There are many other devices on the market that were not studied.
A secondary aim was to compare the images and sounds obtained with each of the devices to define which was best able to provide the physician with accurate clinical information. No attempt was made to make a clinical diagnosis such as aortic stenosis or otitis media from the data. The study was done to assess the validity of a novel device. Both the novel and stand-alone devices adhere to the International Electrotechnical Commission standard for medical products and have received 510K clearance by the U.S. FDA. The study was approved by the local Institutional Review Board, with parental/guardian consent obtained in each case for participation. Data were prospectively collected from children of ages 2–18 years presenting for their scheduled visit to a Pediatric Cardiology Clinic at the University of Virginia Children's Hospital (Charlottesville, VA) affiliated with the UVA Health System and the UVA School of Medicine. A standard physical examination was performed in the clinic by faculty physicians. Once consent was obtained, the study nurse obtained heart sounds, lung sounds, and images of both tympanic membranes with the TytoCare device and the two stand-alone digital examination devices: the One Digital Stethoscope (Thinklabs Medical LLC) and the Horus HD Digital Scope System (JEDMED Instrument Co.). The images and sounds were randomized by device before data acquisition and subsequent review. In each examination, the following information was recorded from each participant: four heart sounds from the novel device and four from the stand-alone stethoscope; six lung sounds from the novel device (front/back of body) and six from the stand-alone stethoscope; and two ear images from the novel device (left/right) and two from the stand-alone otoscope. All data were loaded onto a secure server. The sounds and images were reviewed by eight physicians (two fellows in cardiology, two pediatric pulmonologists, two general pediatricians, and two pediatric cardiologists). The images and sounds were reviewed on a secured website, and the reviewers were blind to the device and the subject. Exclusion criteria for the study were skin complaints that might limit device use, cognitive impairment, and cases wherein parental or guardian consent could not be obtained. Skin conditions such as severe inflammation might limit cooperation owing to discomfort. No patients who consented were excluded for skin conditions or cognitive impairment. The heart examination with each of the devices comprised recordings of the four standard auscultation points (aortic, pulmonic, tricuspid, and mitral). Lung auscultation was conducted at six standardized points (two anterior and four posterior), recording 8 s of audio at each site. All reviewers recorded their opinions of the quality of the images and sounds on a Likert scale between 1 and 5 (where 1 = very good and 5 = very poor), such that higher mean values signified worse overall diagnostic quality. Categorical ratings were made on a scale for diagnosis at each anatomic site, such that 1 = no clinical finding, 2 = significant clinical finding, and 3 = presence/absence of a significant clinical finding could not be determined based upon the information provided. Statistical Methods Statistical analyses were performed using IBM SPSS® Version 13.0 Software (SPSS, Inc., Chicago, IL).
As there were 24 recordings and images per patient (10 auscultation recordings plus 2 images with both the novel and the stand-alone devices), a minimum of 30 patients would be needed for the provision of 360 data points assessable by each of the measurement systems (720 total). This calculation exceeded the sample size needed to demonstrate a probability difference of 0.3 between the groups. Quality assessments of the devices were recorded as means ± standard deviation, with comparisons of continuous variables using the two-sample t test or the Wilcoxon rank sum test where appropriate. Chi-squared, Fisher's exact testing, and proportional Z tests were used where appropriate for comparisons of quality on the categorical ratings and for diagnostic group assignment. The internal consistency of the data was measured with Cronbach's alpha and the reproducibility with the intraclass correlation coefficient. It was assumed that those who were rating the devices and measuring each data point were representative of the rating population as a whole, permitting estimates of both inter- and intrarater reliability, where a preset threshold >0.80 was considered good. Confidence limits of 95% were determined, with p values <0.05 considered significant. Between July and October 2016, 50 children were enrolled in the study, including 23 males and 27 females (mean overall age 10.5 years [range 2–17]). Of the cohort, 12 had known structural cardiac abnormalities, with 1 child having a history of arrhythmia. The remainder of the children were referred because of a range of conditions including murmur, chest pain, palpitations, and syncope. Images and sounds obtained with the novel device were judged to be of higher quality than those obtained with the stand-alone peripheral devices (mean quality score overall = 2.8 ± 1.05 vs. 3.39 ± 0.94, respectively, where smaller means equaled better quality; p < 0.001). Table gives comparisons between the novel and the stand-alone devices with regard to image and sound quality.
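A minimal sketch of the kind of quality-score comparison just reported, using simulated per-recording ratings whose means are chosen only to echo the reported values (2.8 vs. 3.39); this is illustrative and not the study data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical mean quality ratings per recording (1 = very good, 5 = very poor).
novel = rng.normal(2.8, 1.05, 360).clip(1, 5)
stand_alone = rng.normal(3.39, 0.94, 360).clip(1, 5)

# Two-sample t test and Wilcoxon rank-sum test, as in the reported analysis.
t, p_t = stats.ttest_ind(novel, stand_alone)
u, p_w = stats.ranksums(novel, stand_alone)
print(f"t test p = {p_t:.4g}; rank-sum p = {p_w:.4g}")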
The novel device was more likely to be rated higher overall, with a lower chance that the clinician was unable to document a clinical finding when using the device ( p = 0.001) as compared with the stand-alone devices. Both the Cronbach alpha and intraclass correlation coefficients for inter- and intrareliability exceeded the preset threshold of 0.80 (0.84–0.99 and 0.90–0.99, respectively). gives similar comparisons for clinical findings, in which the novel device was more likely than the stand-alone devices to enable the clinicians to detect a clinical finding ( p < 0.0001) overall, as well as for ear, heart, and lung findings ( p < 0.0001, p < 0.0001, and p = 0.004, respectively). Similarly, the Cronbach alpha and intraclass correlation coefficients for inter- and intrareliability exceeded the preset threshold of 0.80 for diagnosis (0.85–0.99 and 0.89–0.99, respectively). This prospective cohort study from a single outpatient pediatric cardiology clinic demonstrated that the images and sounds obtained using the novel device were of higher quality than those obtained using the stand-alone remote examination devices routinely used in the University of Virginia telemedicine program. The novel device, which incorporates a digital otoscope, stethoscope, examination camera, and thermometer, more adequately enabled remote diagnosis (no camera images or temperature data were collected in this study, so no comment can be made on those functions). There was a high level of intra- and inter-reliability for the recorded measurements of the heart, lungs, and ears. In evaluating children referred to a specialized pediatric cardiology clinic, clinicians more accurately identified abnormal clinical findings using the novel device. There are several issues that may be seen as limitations of this study. The participants were asked to participate on the day of a scheduled visit to the cardiology clinic. Most children were well, with no cardiac pathology, so it would be expected that the majority would have normal heart sounds, normal lung sounds, and normal ear examinations without significant findings. The ear images were not reviewed by an otolaryngologist and for the most part did not reveal pathology other than occasional scarring of the tympanic membrane (noted by the reviewers). Many of the ear examinations were limited by earwax that was not removed. The aim of the study was to compare the novel device with standard stand-alone digital examination tools, not to make a diagnosis or to confirm findings from the clinic visit with either the novel or the standard devices. The value and reliability of assessment of heart sounds with a digital stethoscope have been confirmed previously in pediatric patients. Although most murmur referrals in healthy children >1 year of age do not reveal any cardiac pathology, the ability to efficiently evaluate or follow patients using validated remote examination tools may improve access to care and triage and reduce the burden of travel for patients and their families. The introduction of electronic stethoscopes in the 1980s attempted to improve sound amplification and filtration, with recent implementation of noise removal algorithms capable of cancelling internally and externally derived extraneous noises likely to interfere with lower-amplitude murmurs. The level of agreement between the novel stethoscope and the standard electronic device was high in our study.
Given that some debate remains around breath sound terminology on auscultation, future definitions will influence the level of interobserver agreement for any digital device used. There are some basic practical differences between the novel and other digital stethoscopes. As an example, the One Digital Stethoscope (Thinklabs Medical LLC) has no application interface and requires the attachment of separate connectors to the audio channels, with hand manipulation of the audio filter. The hardware connections of this system have the potential to degrade the audio quality, whereas the Tyto system (which has all of its hardware built in, along with embedded filtration software) appears to have superior sound quality and is much easier to use due to the touch screen interface. Image ratings and the reliability of tympanic membrane diagnosis were high with the novel device when compared with the stand-alone digital otoscope. Our findings are in keeping with rates reported for other digital equipment of enhanced visualization over conventional microscopy, with similar levels of reported diagnostic inadequacy. Some of the conditions limiting the use of the digital otoscope may be device related, but many issues are nondevice related (such as insufficient visualization of the tympanic membrane and/or occlusion of the ear canal with cerumen). The literature is somewhat confusing since studies are heterogeneous in their reporting of either incomplete drum visualization or lack of diagnosis when there is excess cerumen. The stand-alone digital otoscope studied provides high-resolution imagery; however, the field of view of this instrument is limited, as is the working distance of the automatic zoom, which invariably necessitates some manual focusing of the image. By comparison, the novel device permits a full-field view of the ear canal and the tympanic membrane, with preliminary white balancing of the image at the commencement of the examination and automated built-in imaging focus during the procedure. Overall, the data concerning the efficacy of the use of digital technologies in pediatric clinical management are somewhat difficult to interpret, principally because of system variations. This may in part, however, be obviated in the future after the establishment of network groups such as the Health Experts online at Portsmouth (HELP) system in 2014 and the Supporting Pediatric Research on Outcomes and Utilization of Telehealth (SPROUT) established in 2015, and with consensus decision reporting for the classification of recorded lung sounds. The validation of diagnostic digital technology in the clinic may also provide a repository of quality sounds and images in a virtual library, which may be used for training purposes. There are many issues that impact the use of telemedicine in pediatric populations, including reimbursement, licensure, bandwidth, electronic medical record integration, credentialing, technology choices, consumer demand, and practice guidelines. Equally important are concerns that care delivered outside the context of the primary or specialty medical home, particularly when patients and their families seek "direct-to-consumer" services, may fragment care and may not compare favorably with the standard-of-care in-person visit.
In support of the use of telemedicine for pediatric populations, this study demonstrates that remote examination tools can provide high-quality data to inform telehealth examinations. Such data may permit more widespread screening for medical conditions of childhood warranting medical attention. In summary, the novel device (Tyto) met both of the articulated study aims. First, the novel device performed better than the stand-alone digital examination devices utilized in our telehealth program. Second, use of the novel device resulted in lower rates of diagnostic failure, with high intra- and inter-reliability for examination of the heart, lungs, and ears. In our study, the device was managed by a registered nurse with basic training in its use; there is great potential for the novel device to be used by parents at home, or by personnel in a school or day-care facility, to collect the relevant data for transmission to a remotely located clinician to inform clinical decision-making, wherever possible within the context of the medical home. This approach may augment the care of children with special needs or medical fragility.
INTRODUCTION Approximately two thirds of households in the United States, totaling around 85 million homes, own at least one pet (Acuff et al., ). The global pet food market exceeds US$122 billion, with the U.S. pet food market estimated at around US$50 billion in 2021 (Wall, ). Consumers spent a total of US$100 billion on pet‐related expenditures in 2021 in the United States alone (Thiel, ). With increasing trends in the humanization and premiumization of pets, contact between owners and their companion animals has become an important route of exposure to human pathogens within the household (Ehuwa et al., ). Humanization of pets refers to the trend of treating pets like humans and family members. Salmonella , Listeria monocytogenes , and Shiga toxin‐producing Escherichia coli (STEC) have all been found in many types of pet food, both dry and wet, and can potentially cross‐contaminate food contact surfaces, utensils, storage areas, and refrigerators, leading to an increased risk of human transmission of these infectious agents (Weese & Rousseau, ). Pet foods are usually labeled 'not safe for human consumption,' as they do not have the same microbial safety specifications as human foods. However, due to the close bonds between humans and pets, pet foods are often handled with bare hands, and humans tend to co‐mingle with pets, with some children even taking bites of pet foods (Balachandran et al., ). The ingredients used in pet foods come from diverse sources, including by‐products from human‐grade food production systems. The lack of postprocessing pathogen mitigation strategies, the tendency for bulk and loose marketing of pet treats (Adley et al., ), and a lack of validated pathogen reduction steps during raw pet food production make these products even more vulnerable to postprocess contamination and cross‐contamination (PFI, ). Raw pet foods represent the largest category of pet foods with reported Salmonella contamination (Table ); thus, neither the Food and Drug Administration (FDA) nor the Centers for Disease Control and Prevention (CDC) recommends feeding raw diets to pets (FDA, ; CDC, ). Pet foods are typically classified based on their moisture content: dry, semimoist, and wet (canned) pet food, with moisture contents of 5%–12%, 22%–35%, and more than 65%, respectively (Hu, ). These foods are processed differently. For example, dry pet food is extruded at high temperature and pressure; semimoist products are thermally treated and generally make use of humectants for preservation; canned pet foods are commercially sterilized and sealed according to U.S. 21 CFR part 113; and raw pet foods are sold and served without any pathogen kill step. Therefore, the prevalence and the route of contamination of each type of food also vary. Human‐adapted Salmonella serotypes are pathogenic to dogs and cats; however, acute clinical cases of salmonellosis are rare in these animals (Sanchez et al., ). When clinical cases are seen, they are often associated with exposure to high bacterial loads in puppies and kittens, in which enteritis is common (Sanchez et al., ). Salmonella can be transmitted from a carrier pet to humans via both direct and indirect routes. The median infectious dose of Salmonella in dogs and cats is higher than that in humans, which is ca. 1000 infectious bacteria (Public Health Agency of Canada, ; Sanchez et al., ). In a multilaboratory survey of dogs and cats conducted between 2012 and 2014 in 36 U.S. states, Reimschuessel et al.
reported that <1% (3/542) of cats and 2.5% (60/2,422) of dogs were positive for Salmonella , with 55% of the positive dogs presenting with diarrhea. In a separate report by Ellis and Sanchez , the prevalence of Salmonella in healthy dogs and cats was reported to range from 1%–36% and 1%–18%, respectively. Salmonella prevalence in pet foods in the United States has been estimated at 0 to 44% in dry pet foods (Pace et al., ), 7% to 44.4% in raw pet foods (Jones et al., ; Strohmeyer et al., ), and 12.5% to 41% in pet treats (Li et al., ; White et al., ). However, the relatively older finding of a high (44%, n = 11/25) prevalence of Salmonella by Pace et al. is most likely due to a faulty batch from one manufacturer: the researchers reported that all 11 positive samples came from one specific manufacturer among the four manufacturers sampled. A recent study found a Salmonella prevalence of just 0.42% (1/240) in dry pet foods (Nemser et al., ). Healthy dogs that are fed Salmonella ‐contaminated pet food may shed Salmonella in their feces and saliva for up to 7 days (British Small Animal Veterinary Association (BSAVA), ; Verma et al., ). With the establishment of a so‐called zero‐tolerance policy, Salmonella is considered an adulterant in pet foods (FDA, ). As the number of pet owners continues to rise, so does the demand for commercial pet foods. However, limited information is available on the prevalence of foodborne pathogens, particularly Salmonella , in these products and on risk mitigation strategies against them. Therefore, this review summarizes and critically analyzes published information on the prevalence, sources of contamination, transmission routes, conventional and novel antimicrobial intervention strategies, and the regulatory framework around Salmonella in pet food safety in the United States. REVIEW METHODOLOGY A systematized literature search was conducted on Google Scholar, Scopus, and PubMed. The inclusion criteria for the search were as follows: (a) scientific research articles or official reports about Salmonella in pet foods, (b) mitigation strategies against Salmonella in pet foods, (c) reported results on Salmonella outbreaks, and (d) investigations of pet food safety. The keywords used for the literature search included " Salmonella ," "pet food," "dog food," "cat food," "outbreaks," "recall," "pet food safety," "pathogen control," "human salmonellosis," "raw pet foods," "prevalence," "sources," "FDA," and "USDA." Documents in English were retrieved, and articles were assessed for their relevance based on title and abstract, regardless of publication year. With few exceptions, the included reports were published between 2000 and 2024. PREVALENCE OF SALMONELLA CONTAMINATION IN PET FOOD In general, pet foods and pet treats have a higher Salmonella prevalence than other animal feeds. This could be explained by the fact that animal‐derived ingredients constitute around 60% (w/w) of pet foods compared with only 2% (w/w) of finished animal feeds (Brookes, ; Hendriks et al., ). This is further supported by findings from the U.S. FDA Center for Veterinary Medicine in 1994, where Salmonella prevalence in animal‐derived ingredients was higher (82%) than that in plant‐derived ingredients (37%) (McChesney, ). Similarly, a study conducted by Li et al. indicated that 66.1% and 41.3% of animal‐derived ingredients tested between 2002–2006 and 2007–2008, respectively, were positive for Salmonella .
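As a toy illustration of the title and abstract screening step described in the Review Methodology above, the sketch below filters records by keyword match. The record list and helper names are hypothetical; this is not the authors' actual search or screening pipeline.

```python
# Toy keyword-based screening, loosely following the inclusion keywords listed above.
# Records and helper names are hypothetical, not the review's actual workflow.
KEYWORDS = {"salmonella", "pet food", "dog food", "cat food", "outbreak",
            "recall", "pet food safety", "raw pet food", "prevalence"}

def is_relevant(title: str, abstract: str) -> bool:
    """Retain a record for full-text review if any keyword appears in its title or abstract."""
    text = f"{title} {abstract}".lower()
    return any(keyword in text for keyword in KEYWORDS)

records = [
    {"title": "Prevalence of Salmonella in raw meat-based diets for dogs", "abstract": "..."},
    {"title": "Canine orthopedic surgery outcomes", "abstract": "..."},
]
retained = [r for r in records if is_relevant(r["title"], r["abstract"])]
print(f"{len(retained)} of {len(records)} records retained for full-text review")
```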
Pet foods, as a possible source of Salmonella , were recognized as early as 1955, when the pathogen contaminated 26.5% of 98 dehydrated dog meal samples (Galton et al., ). From 1955 to 2024, more studies were conducted to evaluate the presence of Salmonella in pet foods, with prevalence ranging from 0 to 80% (Table ). Although there are many factors contributing to higher prevalence rates, including the number of samples and method of detection, the type of pet foods and the processing treatment appear to be the primary influencers. Table summarizes the available scientific literature with research studies on the prevalence of Salmonella in different categories of pet foods. 3.1 Prevalence of Salmonella in dry pet foods Dry pet foods, also referred to as kibble, are often sold in sealed packages or containers and are typically subjected to a high thermal and pressure treatment that eliminates microorganisms of public health concern and/or reduces them to acceptable levels (Lambertini et al., ). However, there is no standardized postprocessing pathogen mitigation step in dry pet food production (Bianchini et al., , ; Lambertini et al., ), making postprocessing steps the main entry points for pathogens in extruded dry pet food. Once contaminated, Salmonella enterica serovar Typhimurium was known to survive for up to six months in dry pet food kibble stored at room temperature (Adelantado et al., ). Another study reported that it can survive for up to 19 months in dry pet food kibble (Lambertini et al., ). Unlike raw pet foods or treats, Salmonella contamination in dry pet foods is not common due to the high‐temperature treatment of the raw ingredients. Several studies in the United States, Canada, South America, and Europe, with sample size varying from 24 to 36, have analyzed dry pet food and were not able to detect Salmonella in their samples (D’ Aoust, 1978; Kazimierska et al., ; Strohmeyer et al., ). Similarly, in a recently published study from the UK, Morgan et al. could not detect Salmonella from the tested commercial extruded or cooked pet food kibbles (0/24). However, in a relatively large sample‐size study conducted in Poland by Wojdat et al. , who examined 2271 dry food samples, 22 of them (0.97%) were positive for Salmonella . Similarly, Nemser et al. examined dry pet food in the United States and identified a relatively lower prevalence of Salmonella , 0.41% (1/240). Contrastingly, in other studies, such as Pace et al. , in the United States, a higher prevalence (11/25, 44%) of Salmonella in dry pet foods was detected. However, it is worth noting that the pet food samples tested in the study of Pace et al. were collected following the infection of a 2.5‐month‐old girl with S . Serovar Havana back in 1976, and that ultimately was linked to dehydrated dog food. Similarly, in a recent prevalence study conducted in Lebanon by Serhan et al. , 64% (42/66) of the dry pet food tested were presumptively positive for Salmonella . The study tested the samples by culture method, where samples were first selectively enriched in TT and RV broths followed by plating on selective agar (XLD agar), and the presence of black colonies was reported as presumptive positive (hereafter called culture method). The purchased commercial pet food samples in this study originated from different countries which may have gone through long transport, storage, and handling. Not performing the confirmatory test for Salmonella presumptive samples was a drawback of this study. 
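Because the dry-food surveys above range from a few dozen samples to over two thousand, the headline percentages carry very different statistical uncertainty. The sketch below attaches 95% Wilson score intervals to counts quoted above; the choice of the Wilson interval is ours for illustration and is not something the cited studies reported.

```python
from math import sqrt

def wilson_ci(positives: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a prevalence of `positives` out of `n` samples."""
    p = positives / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half_width = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half_width, centre + half_width

# Counts quoted above for dry pet food surveys (illustrative use only).
surveys = [("Pace et al.", 11, 25), ("Nemser et al.", 1, 240), ("Wojdat et al.", 22, 2271)]
for name, positives, n in surveys:
    low, high = wilson_ci(positives, n)
    print(f"{name}: {positives}/{n} = {positives / n:.2%} (95% CI {low:.2%}-{high:.2%})")
```

The wide interval around the 25-sample estimate, compared with the narrow interval around the 2271-sample estimate, is one reason the extreme figures from small or outbreak-triggered surveys should be interpreted cautiously.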
3.2 Prevalence of Salmonella in semimoist pet foods Semimoist pet foods are convenient to feed and are palatable, so they are a popular product. The raw materials used in semimoist pet foods, such as meat and animal by‐products, as well as inadequate processing, cross‐contamination, and improper handling and storage conditions, are some of the contributing factors for pathogens entry. The high‐moisture content in semimoist pet foods provides an environment conducive to bacterial and fungal growth. If the product's water activity is not controlled or if it is not properly maintained, pathogens can proliferate. However, semimoist pet foods present a lower risk of Salmonella contamination as two studies have documented zero prevalence of said pathogen. As part of a study by the Veterinary Laboratory Investigation and Response Network (Vet‐LIRN), USA, Nemser et al. analyzed pouch‐packaged semimoist foods and reported zero prevalence (0/240) of Salmonella . Ribeiro‐Almeida et al. were also not able to detect Salmonella in commercial semimoist pet foods they have investigated (0/4). 3.3 Prevalence of Salmonella in wet pet foods Generally, canned pet foods contain high‐moisture content, which classifies them as wet pet foods. In the case of canned pet foods, they are commercially sterilized and sealed according to U.S. 21 CFR art 113 in hermetically sealed containers (FDA, ). The application of heat and the aseptic process prevents the survival and growth of microorganisms, including Salmonella . Canned pet foods were examined for Salmonella by D'Aoust , Wojdat et al. , and Strohmeyer et al. , and were not able to detect any out of 29, 18, and 24 samples tested, respectively. In a study conducted in Manchester, UK (Barrell, ), Salmonella prevalence in cooked open pet foods was reported as high as 26% (26/99). Similarly, in a study by Serhan et al. in Lebanon, 99 canned foods were examined, and they isolated presumptive Salmonella colonies from 26 (26%) samples. However, although the former conducted serotyping, the latter did not conduct the confirmatory test of the colonies, leaving a space to assume that the true Salmonella ‐positive samples may be lower. This is in contrast with a recent study from Portugal, where 22 cooked wet pet food samples were tested for Salmonella , and all of them were reported negative (Ribeiro‐Almeida et al., ). 3.4 Prevalence of Salmonella pet treats and chews Dogs and cats are often provided with treats in addition to their basic foods. These treats and chews are considered complementary products (Kepinska‐Pacelik & Biel, ). In particular, dogs have conditional cravings for biting, and chews are provided to prevent them from damaging home furniture and appliances. These treats and chews often contain animal by‐products or animal‐derived products. Some examples of such products are beef jerky, animal ears, trachea, tendons, masseters, fish meal, blood meal, and animal fat (Kepinska‐Pacelik & Biel, ). Generally, after manufacturing, pet treats undergo a dehydrating step to reduce and bring the moisture content to a desirable level, making it unlikely for Salmonella and other pathogens to grow (Lambertini et al., ) unless the product is abused in terms of high moisture or temperature. Often, such animal‐origin chews and treats are not processed and are frequently sold as open and loose in bulk bins (Adley et al., ), making them more prone to pathogenic contamination. 
Unlike regular pet foods, pet treats are not served in the food bowls nor delivered using scoops or spoons. Treats are held by bare hands (direct human contact), posing another level of threat from pathogen transmission amongst the handlers who are unaware of the possible health risks associated with the contaminated pet treats. A higher prevalence of Salmonella in pet treats has been reported in several studies. Clark et al. conducted a nationwide survey analyzing 94 pig ears and 39 pet treats in Canada and reported 51% (48/94) and 38% (15/39) Salmonella prevalence, respectively. Another study in the United States by White et al. explored 26 domestic and 132 imported pet treats and reported 41% (65/158) Salmonella ‐positive samples. It is important to note here that both investigations sprung from the incidence of S . Infantis infections in humans in Alberta, which was later associated with pig ears as pet treats. Similarly, Adley et al. tested 102 pet treats from Limerick City, Ireland, and found 24.5% (25/102) Salmonella ‐positive samples using the culture method and a higher positive rate of 28.4% (29/102) upon PCR confirmation. Notably, in this study, all the positive samples originated from a single distributor. However, the authors did not clarify whether the samples were from the same batch or from different batches. It is likely that if the positive samples were from one particular batch from one single distributor, it could be a contamination issue. On the other hand, in a study conducted in Brazil by Galvao et al. , only 0.93% (1/108) of the pet treats were Salmonella positive. However, the limitation of this study was that these samples were obtained from only one supplier that produced pet treats for export. The low Salmonella prevalence in these products could be linked to good manufacturing practices and strict implementation of microbiological quality parameters, among others. Li et al. conducted surveillance of finished feeds, feed ingredients, supplements for pets, pet foods, and pet treats to monitor the trend of Salmonella contamination in animal feeds over different periods. It was observed that the prevalence of Salmonella in pet foods and pet treats both declined from 13% and 12.3% in 2002–2006 to 9.8% and 4.8% in 2007–2009, respectively. Although the pet food samples were not defined in more detail, the results showed that there is a higher Salmonella prevalence in pet foods than in pet treats. Similarly, in a study by the Vet‐LIRN, Salmonella was not detected in pet treats (0/190) (Nemser et al., ). In a recent study by Morgan et al. , 16% (13/84) of commercially available dry pet treats in the UK were positive for Salmonella . When the treats were traced, they found some of the treats were unpackaged with no label, some were individually wrapped, some were delivered in boxes as loose treats, and some in clear plastic bags without labels. 3.5 Prevalence of Salmonella in raw pet food/raw meat‐based diets Pet food is considered raw when it contains meat, bones, organs, and/or eggs, sometimes with vegetables that have not been cooked or treated for safety (PFI, ). However, because no forms of cooking are employed in raw pet food, nonthermal interventions such as the freeze‐drying process and high‐pressure processing (HPP) may be considered. 
Despite the disagreement between pet owners and veterinarians in terms of nutrition and public health (Freeman & Michel, ; LeJeune & Hancock, ; Turnbull, ), there is a rising appeal of raw meat‐based diets (RMBD) among pet owners as anecdotal reports on this type of pet food showcase it as a natural diet with potential health benefits for pets (Nuesch‐Inderbinen et al., ). Pet owners may believe that nonprocessed meat‐based diets are healthier and natural choices for their pets (Morgan et al., ). It is estimated that about 15% to 25% of dogs and 10% of cats are regularly fed RMBD (Stogdale, ). However, another study claimed that approximately 60% of pet owners feed their pets completely or partially RMBD (Ahmed et al., ). RMBD are also commonly fed among racing greyhounds and sled dogs (LeJeune & Hancock, 2001). However, the health benefits of the RMBD are not scientifically supported, and a serious food safety concern exists due to the natural microbiological loads of said products. RMBD can be prepared in various forms, including frozen, fresh, or freeze‐dried options. Commercial raw pet foods are mostly made from a combination of raw meat (beef, chicken, duck, lamb, rabbit, veal, venison, etc.) and offal (hearts, liver, gizzards, etc.), fruits, vegetables, grain, eggs, etc. (Hoelzer et al., ; Nuesch‐Inderbinen et al., ; Raditic, ). All of these products are known to be vehicles for Salmonella transmission (Freeman et al., ). For example, 45% of commercial raw meat diets used in pet foods fed to greyhound were S . Typhimurium–positive (Chengappa et al., ). Due to the public health threat raw pet food poses, numerous studies have been conducted to evaluate the microbiological quality of RMBD. We identified 23 Salmonella prevalence studies in raw pet foods dating back to 2002, with the majority of them published between 2012–2023. Seven studies focused on analyzing RMBD specifically designed for dogs, whereas the remaining studies examined RMBD intended for pets in general. The prevalence of Salmonella in raw pet foods greatly varied from 0–80%. The U.S. FDA cautions the public from feeding raw pet food diets due to Salmonella and other associated pathogens (FDA, ). Mehlenbacher et al. performed a study on frozen, dehydrated, and freeze‐dried raw pet foods purchased locally in Minneapolis and St. Paul area in Minnesota, USA. They reported 7% (4/60) positive for Salmonella serovars 12:i:‐, Montevideo, Kentucky, and Anatum. It was also found that 52% of samples (31/60) were subjected to treatment such as dehydration, freeze‐drying, or HPP, and Salmonella was detected in unprocessed samples. It is worth noting here that all the serovars isolated were multidrug‐resistant (MDR). Similarly, two different studies with larger sample sizes were conducted in the United States, one in Colorado by Strohmeyer et al. and the other at a multistate level by Nemser et al. . They respectively found a similar Salmonella prevalence rate of 7.08% (17/240) and 7.65% (15/196) in commercial raw pet foods. The former used raw meat diets composed of beef, lamb, chicken, or turkey meat produced by seven manufacturers. In contrast, the latter collected commercial feed samples from different states within the United States and processed them in six different laboratories. In a separate study, Cancio (2022) analyzed selected raw pet foods in the United States. They recorded presumptive Salmonella colonies in 33.8% (22/65) of the RMBD, most of which were blends of skeletal muscles, offal, and edible bones. 
In a study conducted in Canada, Finley et al. reported that approximately 21% of the raw pet food diets are positive for Salmonella . Joffe and Schlesinger investigated homemade raw pet food in Canada and reported 8/10 (80%) samples as Salmonella ‐positive. The researchers also reported that 3/10 (30%) dogs fed the contaminated raw pet food shed the Salmonella serovars in their stool. Another study in Canada, which analyzed 25 raw pet foods (24 frozen, 1 freeze‐dried) originating from eight different manufacturers, identified 20% (5/25) of the samples as Salmonella ‐positive (Weese et al., ). Finley et al. reported a much higher Salmonella prevalence of 21% (35/166) among RMBD sold in Canada. Most of the samples ( n = 161) had poultry meat as the main ingredient or as one of the two meat ingredients. Similarly, in a study conducted in the Netherlands, 20% (7/35) of commercial raw pet foods representing eight brands from 14 retailers were positive for Salmonella (van Bree et al., ). In contrast, a study conducted on commercial raw pet foods composed of domestic beef (43%), poultry (41%), and pork (27%) in Finland observed a relatively lower Salmonella prevalence of 2% (2/88) (Fredriksson‐Ahomaa et al., ). In an interesting finding by Morelli et al. in Italy, none of the 29 raw pet food samples tested were positive for Salmonella . The raw pet foods were laboratory‐manufactured (raw ingredients were purchased, and raw pet food was formulated in the laboratory) utilizing meat from beef, turkey, chicken, horse, lamb, salmon, horse, and duck. Considering the composition of raw meat, it is unusual that the researchers could not detect a single positive sample despite the culture, biochemical tests, and serology methods for Salmonella detection. Studies in Sweden, Switzerland, and Italy also showed a lower Salmonella prevalence of 7%, 3.9%, and 7.14%, respectively, when utilizing culture, biochemical, and/or serological testing methods (Bottari et al., ; Hellgren et al., ; Nuesch‐Inderbinen et al., ). Most of the studies on raw pet food were conducted in the United States, Canada, and Europe, possibly due to the dense pet population and premium pet care trends. Studies on the prevalence of Salmonella in RMBDs in Asia and South America are also available. In Thailand, commercial raw pet foods belonging to 12 brands (15 frozen and 2 freeze‐dried) were tested for Salmonella prevalence and found that 53% (9/17) of the frozen and freeze‐dried raw pet food was positive for Salmonella using enzyme‐linked fluorescent assay technology (Kananub et al., ). Similarly, Yukawa et al. in Japan investigated 60 commercial raw pet food samples from six different brands from the Okayama and Osaka regions and reported the presence of Salmonella in 12% (7/60) of the samples. The serovars isolated were Infantis, Typhimurium, and Schwarzengrund, and many of them were MDR S . Infantis, which is an emerging concern in the poultry industry in the United States and Europe. In Chile, Solis et al. tested 31 commercial and 11 homemade raw pet foods (RMBD) and reported Salmonella in 11/42 (26.2%) of the samples. In the study, chicken meat was the main ingredient in 6 of the 11 samples that were positive for Salmonella . Finally, a recent survey by Morgan et al. reported that 4.5% (5/110) of pre‐prepared raw pet food diets were Salmonella positive. 
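The raw-diet surveys summarized above differ widely in sample size, product type, and detection method. As a rough way to see the overall burden they imply, the sketch below computes a naive pooled proportion from counts quoted in this section; this is an illustration only, not a formal meta-analysis, which would require random-effects modeling and study-quality weighting.

```python
# Crude pooled prevalence across several raw pet food surveys cited above
# (counts as quoted in the text; naive pooling for illustration only).
studies = {
    "Strohmeyer et al. (USA)": (17, 240),
    "Nemser et al. (USA)": (15, 196),
    "Weese et al. (Canada)": (5, 25),
    "Finley et al. (Canada)": (35, 166),
    "van Bree et al. (Netherlands)": (7, 35),
    "Fredriksson-Ahomaa et al. (Finland)": (2, 88),
    "Morgan et al. (UK)": (5, 110),
}

for name, (positive, total) in studies.items():
    print(f"{name}: {positive}/{total} = {positive / total:.1%}")

pooled_positive = sum(p for p, _ in studies.values())
pooled_total = sum(n for _, n in studies.values())
print(f"Naive pooled prevalence: {pooled_positive}/{pooled_total} = {pooled_positive / pooled_total:.1%}")
```

The per-study rates printed alongside the pooled figure make the heterogeneity explicit: individual estimates span roughly 2% to 21% even before the smaller homemade-diet and Asian or South American surveys are considered.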
SOURCES OF SALMONELLA CONTAMINATION IN PET FOODS 4.1 Ingredients and raw materials Because Salmonella spp. can be found in dust, soil, rodents, livestock, animal housings, and farm and agriculture products such as grains and meat ingredients, it can easily gain access to and contaminate the pet food production and supply chain at multiple points (Figure ). Contaminated food ingredients are one of the major sources of Salmonella in final pet food products, especially in RMBD. The most common ingredient categories in pet foods include poultry, meat‐ and plant‐based products, additives, enzymes, rendered fat and oils, vitamins, and others (AAFCO, ). The RMBD usually contains skeletal muscle, fat, cartilage, internal organs, and bones of farm animals (poultry, pork, and ruminants), horses, game, and fish (Fredricksson‐Ahomaa et al., 2017). There is an increased risk for contamination in RMBD, given that the raw materials do not generally involve a heat processing or kill step.
Rendered meat and rendered animal products are commonly used in pet foods. Some animal products undergo rendering, a process where raw animal tissues are subjected to heat application, moisture extraction, and fat separation (Meeker & Hamilton, ). The animal protein by‐products could be meat and bone meal, meat meal, blood meal, poultry by‐product meal, poultry meal, feather meal, fish meal, etc. In a study by Kinley et al. , Salmonella was detected in 8.7% of the 150 meal samples from various rendering companies across the United States. A year‐long study by Jiang also observed Salmonella in 8.3% (731/8,783) of the analyzed samples of rendered products across the United States and Canada. Animal offal and a variety of meats are other major components of pet food products. Offal is also known as a variety of meat which excludes muscle of bones, and mostly comprises internal organs, such as the heart, liver, kidney, tongue, gizzards, etc. In a study in the United States, 59.4% (148/249) of chicken liver, a major component in raw pet food, was found Salmonella positive in samples from retail stores in Delaware, New Jersey, and Pennsylvania (Jung et al., ). In Egypt, 13.88%, 11.11%, and 6.25% of chicken gizzard, liver, and breast, respectively, tested positive for Salmonella (Abd El‐Aziz, ). A higher prevalence of Salmonella in chicken giblets in samples from Thailand and Ethiopia was recorded at 86% (190/221) and 42% (24/57), respectively (Boniphace, ; Jerngklinchan et al., ). 4.2 Processing environment The pet food processing environment itself can harbor and become a continuous source of Salmonella . Cross‐contamination can occur if surfaces, equipment, or utensils come into contact with Salmonella ‐contaminated materials or when employees handle food without proper sanitary protocol. If not cleaned and sanitized adequately and regularly, processing equipment (grinders, mixers, conveyors, etc.), floors, and surfaces can become a niche for Salmonella , for example, biofilm formation and maturation leading to continuous shedding of Salmonella . Cracks, crevices, and hard‐to‐reach areas of equipment provide a conducive environment for Salmonella to form a biofilm, which, once mature and ruptured, becomes a regular source of contamination in the supply chain. Salmonella exposed to improper or sublethal sanitizing agents are known to be a higher biofilm former (Dhakal et al., ). There have been reports that Salmonella outbreaks in pet foods were linked to contaminated processing environments. One prominent case happened in 2007 when contaminated dry pet foods led to 79 cases of S . Schwarzengrund infection in humans. During the investigation, the Pennsylvania Department of Health eventually traced the source to the processing environment in the enrobing and flavoring room of the manufacturing plant (Behravesh et al., ). Similarly, another notable pet food‐related outbreak was that of S . Infantis in 2012, leading to 49 cases in 20 U.S. states and Canada. During the investigation, dry dog foods were produced in a single pet food production facility in South Carolina, which was linked to the outbreak (CDC, ). Further, the outbreak strain of S . Infantis was also isolated from a pet owner's opened pet food bag and unopened dry dog food from retail stores. The U.S. FDA investigation found a plant‐wide contamination in the manufacturing facility, leading to one of the largest pet food recalls in recent history. 
The inspectors pointed out various shortcomings in the facility, such as failure in the provision of washing and sanitizing facilities within the plant, breakdown in Sanitation Standard Operating Procedures (SSOP) and preventive maintenance program implementation, and inadequate steps to ensure that processing procedures would not contribute to contamination. Potential sources of contamination in a processing facility include birds (feces and feathers) entering via air vents, traffic patterns, pests, rodents, etc. (Carrion & Thompson, ). In facilities where pet food is stored, pests like insects and rodents may access improperly sealed containers or packaging, leading to contamination of the contents with Salmonella (Leiva et al., ). They can also get access to the food during transportation if it is not properly sealed and protected. Finally, temperature and relative humidity in the processing environment are critical factors to restrict microbial growth in the environment and raw materials. High temperature and humidity may favor bacterial and mold growth. Similarly, moist floors and environments support bacterial growth and proliferation, putting the food production chain at a higher contamination risk. 4.3 Postprocess contamination Considering the high temperature and pressure extrusion as a potent physical antimicrobial intervention, the Salmonella presence in dry pet foods is almost always attributed to postprocessing contamination. The incorporation of flavor and coating agents is suspected to be the major step for pathogen recontamination (Dhakal et al., ). Although it was not specified, in the S . Schwarzengrund outbreak (2008–2012) in humans mentioned above, the materials sprayed in the finished products were suspected to be contaminated with Salmonella (KuKanich, ). The lack of standardized food safety protocols and testing methods on the final products could contribute to a delay in identifying the source of contamination and conducting root‐cause analysis to prevent recurrence. Additionally, packaging and handling could become a potential source of Salmonella contamination for semimoist and dry pet foods. Because there is no further kill step after the coating and drying in dry pet food, downstream operations such as oil and fat coating and flavor addition could be a major point of contamination in dry extruded pet foods. One of the notable cases of postprocessing contamination or faulty handling leading to Salmonella contamination in raw turkey pet products was linked to the 2017–2019 outbreak of S . Reading. Upon investigation, the Iowa Department of Health concluded that the turkey products were improperly prepared and handled, ignoring the USDA FSIS guidelines, and were not held at the appropriate temperature to prevent pathogen growth (Hassan et al., ). Compliance with Good Manufacturing Practices (GMP), appropriate process controls and safety plans, proper personal hygiene, and the use of proper packaging material are other vital considerations to ensure the quality and safety of pet foods. GMPs are fundamental to pet food safety, and their proper implementation is the basis of risk management. Any Salmonella prevalence in the finished product is indicative of a deviation in their food safety system (Leiva et al., ). Cross‐contamination can also occur during storing and handling of pet food in retail spaces. Pet treats, for example pig ears, are often sold loose in bulk bins in pet shops. 
Pig ears sold in bulk bins in Ireland were found positive for Salmonella (Adley et al., ). A study by Finley et al. in Canada reported that natural pet treats were sold in bulk bins without any packaging material or instructions available to buyers, posing a threat of external contamination via birds, pests, and other vectors. Therefore, to minimize external contamination, pet treats are recommended to be individually packaged and irradiated (KuKanich, ).
TRANSMISSION FROM PETS TO HUMANS

Typically, young animals exhibit higher susceptibility to enteric-type infections, with the potential for the infection to progress to a systemic level in more severe cases. In contrast, adult animals are more prone to having asymptomatic infections (Carrion & Thompson, ). Pets with healthy immune systems or those infected with low infectious doses of the organisms, such as those from contaminated pet foods, usually remain asymptomatic or experience only mild, temporary illnesses. Salmonella, a known zoonotic organism, can be transmitted from animals to humans and vice versa. Transmission from pets to humans occurs mostly through the handling of contaminated pet foods or by contact with carrier pets. Occasionally, there are cases where Salmonella-contaminated feed leads to severe illnesses in pets. For example, two septicemic cats presented at the Tifton Veterinary Diagnostic and Investigational Laboratory (TVDIL) at the University of Georgia's College of Veterinary Medicine were found to have been fed a raw beef diet contaminated with S. Newport (Stiver et al., ). At lower Salmonella exposure levels, pets can act as carriers and shed the organism in the home.

5.1 Household interactions with pets

The increasing trend of humanization of pets and the close relationship between pets and their owners expose humans to pathogens, such as Salmonella. Companion animals spend much of their lives indoors and in intimate contact with their owners. Household interactions, such as direct and/or indirect contact with pets, direct and/or indirect contact with contaminated pet food, unsafe pet food handling, and/or contact with pet feces, are some of the potential routes by which pet owners can acquire Salmonella from pets (Lambertini et al., ).
Figure shows some of the common ways humans can acquire Salmonella from their pets. After consumption, pet food comes in contact with the pet's jowls, whiskers, nose, mouth, and tongue. Pet owners generally do not clean pets' mouths and faces after they eat. When pets come into direct contact with their owners, including children and the elderly, by playing, sleeping with, hugging, and licking them, they could potentially transmit foodborne pathogens. Additionally, it is not unusual for kids and toddlers at home to nibble and eat pet foods from pet bowls while playing with pets. A four-month-old infant in Japan with symptoms of diarrhea was diagnosed with S. Virchow in his stool. Upon investigation, the infectious Salmonella serovar was traced back to the household dog, which was a carrier of the pathogen (Sato et al., ). Another example of contact with contaminated pet food leading to human infection was reported by Hassan et al. , where two household children were infected with S. Reading, which was traced back to contaminated raw turkey pet food. Pets that are fed raw pet food diets tend to shed increased pathogen levels in their feces (Weese et al., ), and RMBD is considered a risk factor for fecal carriage of Salmonella by pets (Kaindama et al., ). Indirect contact, such as interactions within a common environment shared by humans and pets, within the environment where the pet lives, or with items used by the pet, such as their toys, food bowls, and grooming tools, could also lead to human salmonellosis. Additionally, this could also be due to unsafe pet food handling, such as improper storage, not cleaning the pet food bowls before and after feeding, and inadequate handwashing and sanitizing during meal preparation. Weese and Rousseau tested Salmonella recovery and survival from common household plastic and stainless-steel feeding bowls after adding 2 g of meat inoculated with a 5-log S. Copenhagen culture. The inoculated foods were wiped with a gloved hand, leaving a thin layer of residue, and the bowls were allowed to dry at room temperature for 1 h. Following this, the bowls were cleaned using methods including a warm water rinse, soap, bleach, and a dishwasher. Unusually, none of the methods, even scrubbing followed by bleach, was effective in removing all Salmonella from the bowls, with 33% of stainless-steel bowls and 50% of plastic bowls remaining positive for Salmonella. Meanwhile, with both a warm water rinse alone and a rinse followed by a scrub, 100% of the stainless-steel bowls and 92% of the plastic bowls were still positive for Salmonella. Similarly, pet feces could also be a potential source of contamination, especially when handling animal waste and if fecal shedding frequently occurs inside the household. When pets consume Salmonella-contaminated foods, they tend to become carriers. Finley et al. studied the risk associated with feeding Salmonella-contaminated commercial raw food diets and found that seven of the 16 exposed dogs shed Salmonella 1 to 7 days after consumption. The dogs fed Salmonella-free diets did not shed Salmonella in feces. Additionally, the likelihood of fecal Salmonella shedding is higher in dogs fed a raw meat diet than in dogs fed conventional diets (Runesvard et al., ; Viegas et al., ). Therefore, household interaction with carrier pets poses a threat of salmonellosis to humans.
5.2 Population at risk

The 2023 human salmonellosis outbreak linked to a Texas manufacturer's dry pet food was a prime example of how infants and young children are the most vulnerable population: 86% of the reported ill population were children one year of age or younger (CDC, ). Humans can easily acquire pathogens either from handling contaminated pet foods or from pets. For immunocompromised individuals, salmonellosis was also noted as one of the common zoonotic diseases that can be acquired from pets (Hemsworth & Pizer, ). Although the Centers for Disease Control and Prevention recommend washing hands for 20 seconds after handling pet foods and keeping children younger than five years away from pet foods, treats, and supplements, the extent of adherence by pet owners cannot be ascertained. Feeding RMBDs to pets is also not recommended by the FDA and CDC (FDA, ; CDC, ). These agencies warn of the dangers of such practice but also provide recommended measures to follow if owners opt to feed RMBDs.
PET FOOD RECALLS AND SALMONELLA OUTBREAKS LINKED TO PET FOODS

Once introduced, Salmonella can persist within the pet food matrix due to its ability to survive in low-moisture environments and resist typical processing conditions (Finn et al., ). Any potential case of Salmonella contamination in pet foods, whether or not it leads to an outbreak or human cases, is investigated by the U.S. FDA in collaboration with the U.S. CDC and State Departments of Agriculture. Confirmed or suspected Salmonella contamination is usually followed by a voluntary recall by the manufacturer. Between 2003 and 2022, 859 Salmonella-linked recalls were associated with pet food and constituted 24% of the total pet food recalls and 85% of total bacterial pathogen-linked pet food recalls (Debeer et al., ). It should be noted here that these recall data include food ingredient recalls, not just pet foods. Timely recalls help minimize the possibility of human illnesses.
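To make the scale of these figures concrete, the short calculation below back-computes the totals implied by the reported shares. It is an illustrative sketch that assumes the percentages are exact; the derived totals are arithmetic estimates, not figures reported in the cited source.

```python
# Illustrative back-calculation from the figures reported above (Debeer et al.):
# 859 Salmonella-linked pet food recalls represented 24% of all pet food recalls
# and 85% of bacterial-pathogen-linked pet food recalls over 2003-2022.

salmonella_recalls = 859

total_pet_food_recalls = salmonella_recalls / 0.24       # ~3,579 recalls overall
bacterial_pathogen_recalls = salmonella_recalls / 0.85   # ~1,011 pathogen-linked recalls

print(f"Implied total pet food recalls (2003-2022): ~{total_pet_food_recalls:.0f}")
print(f"Implied bacterial pathogen-linked recalls:  ~{bacterial_pathogen_recalls:.0f}")
```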
Table provides a comprehensive and up-to-date list of pet food recalls involving products contaminated or suspected to be contaminated with Salmonella. Additionally, the table identifies pet foods linked to Salmonella outbreaks in humans. Among human cases of Salmonella outbreaks associated with pet food, pig ear dog treats associated with a 1999 outbreak in Canada are noteworthy. Thirty dog owners, many of whom were children, who had handled and/or fed the treats to their dogs tested positive for S. Infantis. Follow-up studies after this outbreak revealed that pig ears were frequently associated with Salmonella (Clark et al., ; White et al., ). Subsequently, in 2002 in Calgary, Canada, pet treats sold by a Texas-based company were associated with outbreaks of S. Newport PT 14 in humans. A total of five human cases were reported, including one 1-month-old infant (Finley et al., ). The Salmonella serovar in this outbreak was traced back to commercial pet treats, and all the households identified as positive were reported to have fed pet treats (dried beef patties) from the same source. In this outbreak, it was the pet owners who got infected and not the pet animals, as the animals' fecal samples showed negative results (Pitout et al., ), highlighting the risk associated with the handling of contaminated pet food. A 2005 human outbreak of S. Thompson was linked to frozen raw beef and salmon pet treats for cats and dogs (Finley et al., ). This led to a total of nine culture-confirmed human Salmonella cases in Washington state and Western Canada. Upon investigation, it was found that the dehydration temperature applied to the treats was not high enough to kill the pathogen, and no other pathogen-killing steps were involved. All the infected people were reported to have handled pet food from a common source, and the oldest person infected was aged 81. Similarly, a notorious pet food-linked Salmonella outbreak in humans in 2007–2008 was associated with dry dog and cat food originating from a Pennsylvania plant (Deasy et al., ). A total of 79 human cases in 21 states were reported positive for S. Schwarzengrund. Upon investigation, the Salmonella serovar was isolated from one of the processing rooms in the plant. Based on the available data on the infected people, 39% of them were 1 year of age or younger. A dry dog food linked to the human S. Infantis outbreak in 2012 was reported to be associated with a manufacturing plant based in South Carolina. The outbreak caused 49 human illnesses: 47 in 20 U.S. states and two in Canada (FDA, ), and among the 24 infected people with available information, 10 (42%) were hospitalized. However, age-based patient information was not available. In another pet treat-associated human Salmonella outbreak, in 2013, locally made jerky pet treats in New Hampshire caused 43 illnesses with 16 (37%) hospitalizations (Cavallo et al., ). Among the infected patients, 69% reported exposure to contaminated pet treats, and 95% reported exposure to treat-fed dogs. Investigation of the manufacturing site of the pet treats revealed inadequate processing and improper sanitary measures during production and packaging. Additionally, 78% (7/9) of environmental samples from the site were positive for the outbreak strain. Similarly, in 2018, contaminated raw ground turkey pet food was associated with human cases of Salmonella (FDA, ). The outbreak led to two human illness cases.
However, further details about the infected patients were not provided. Further testing of the suspected turkey pet foods revealed the presence of Salmonella spp. In yet another pet treat-associated human Salmonella outbreak, pig ear pet treats from multiple brands were linked to a massive outbreak in 2019 that led to 154 illnesses with 35 (26%) hospitalizations in 34 states (FDA, ). The affected people ranged from 1 to 90 years old. Twenty-seven (19%) of the illnesses were among children younger than five years of age. Salmonella serovars Cerro, Derby, London, Infantis, Newport, Rissen, and I 4,[5],12:i:- were associated with the outbreaks. Notably, multiple MDR serovars were isolated from this outbreak; however, details of the MDR serovars were not provided (CDC, ). The biggest and most severe Salmonella outbreak associated with pet food to date was reported in 2017–2019 and was linked to raw turkey products (CDC, ). A total of 358 people were infected with the outbreak strain of S. Reading in 42 states, causing 133 hospitalizations (44%) and one death. The age of the infected people ranged from 1 to 101 years, with a median of 42. Four out of 200 people interviewed reported getting sick after feeding raw ground turkey to their pets, while the majority were reported to have been eating or preparing turkey. Out of the total isolates analyzed, 64% (314/587) were MDR. Upon investigation, the outbreak serovar was isolated from raw turkey products, raw turkey pet food, and live turkeys. Similarly, in 2023, dry pet food was associated with an outbreak of S. Kiambu (FDA, ). As per the latest update, this outbreak led to 7 illnesses and 1 hospitalization; however, the true number of sick people was expected to be much higher. An alarming finding from this outbreak was that 86% of the infected people were 1 year of age or younger, and the remaining 14% were 65 years and older. The most recent raw pet food-linked MDR Salmonella outbreak in humans was associated with raw pet food and contact with cattle (Public Health Agency of Canada, ). The outbreak led to 44 illnesses and 13 hospitalizations, with the infected people ranging from 1 year to 91 years of age. A significant number of the cases (43%) were in children 5 years of age or younger. Another concerning finding was that the Salmonella strain I 4,[5],12:i:- associated with this outbreak was extensively drug-resistant, including resistance to antimicrobials commonly used in human clinical medicine. Salmonella-linked pet food recalls between 1999 and June 2024 were categorized by type of food: 33.3% involved pet treats, 29.1% dry pet food, 30% RMBD, 4.2% vitamins and supplements, and 1.7% wet food (Figure ). However, when such recalls were analyzed between 2015 and 2024, 54% were associated with RMBD, compared with 16% with pet treats, 18% with dry pet food, and 4% with supplements (Figure ). There appears to be a shift in the type of pet food implicated, which may be linked to the growing popularity of RMBD. Similarly, out of the total human Salmonella outbreaks associated with pet food types, 45.5% (5/11) were linked to pet treats, 27.3% (3/11) to RMBD, and 27.3% (3/11) to dry pet foods (Figure ). This could be related to the fact that RMBD and pet treats receive minimal treatment to mitigate pathogens.
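The category shares above are simple frequency tabulations over recall records. As a minimal sketch of how such a breakdown can be reproduced, the snippet below tabulates a small, hand-entered set of placeholder recall records by category and time window; the category names mirror those used in the figure, but the example data are hypothetical and are not the actual recall dataset summarized here.

```python
from collections import Counter

# Hypothetical recall records as (year, product_category) -- placeholder data only,
# not the FDA/CDC recall list summarized in the figure.
recalls = [
    (2019, "pet treats"), (2020, "RMBD"), (2021, "RMBD"), (2021, "dry pet food"),
    (2022, "RMBD"), (2022, "pet treats"), (2023, "dry pet food"), (2023, "RMBD"),
    (2024, "supplements"), (2024, "RMBD"),
]

def category_shares(records, start_year=None, end_year=None):
    """Percentage of recalls per product category, optionally within a year window."""
    subset = [cat for year, cat in records
              if (start_year is None or year >= start_year)
              and (end_year is None or year <= end_year)]
    counts = Counter(subset)
    total = sum(counts.values())
    return {cat: round(100 * n / total, 1) for cat, n in counts.items()}

print(category_shares(recalls))                   # full period
print(category_shares(recalls, start_year=2021))  # recent-period comparison
```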
Finally, out of the total Salmonella-associated recalls in pet foods between 1999 and 2024, 9.5% were linked to human Salmonella outbreaks. What was more concerning was that 45.5% of those human outbreaks were linked to MDR Salmonella (Figure ), with one involving an extensively drug-resistant (XDR) strain. The next section defines the terms MDR and XDR. From 2021 to June 2024 alone, there were 21 Salmonella-linked pet food recalls in the United States, of which 8 were linked to raw pet foods (Table ). Another notable observation from the above-described outbreaks was that children and the elderly were the largest groups of people infected, highlighting their vulnerability in pet food-related outbreaks. The U.S. FDA follows a systematic process to handle pet food recalls. The FDA investigates reports of contamination, illness, and other safety concerns related to pet food in collaboration with state regulatory agencies and other stakeholders. If the FDA determines that a pet food product potentially contains pathogens, it will work closely with the manufacturer and may request the manufacturer to recall the product voluntarily. If a manufacturer refuses to initiate a voluntary recall, the FDA has the authority to order a mandatory recall under the Food Safety Modernization Act (FSMA). The FDA oversees inspections of pet food manufacturers and suppliers of ingredients (excluding those regulated by the USDA FSIS) and conducts investigations in response to consumer complaints. However, pet owners and pet food customers often express concern that enforcement of these regulations is not consistently rigorous and that the FDA, as the regulatory body, conducts only risk-based inspections. The CDC, acting solely as a public health agency without regulatory authority, collaborates with the FDA and USDA when contaminated pet food causes human illness.

MDR SALMONELLA IN PET FOODS

The use of antibiotics in animal agriculture, including the production of ingredients used in pet foods, can contribute to the development of antibiotic-resistant strains of Salmonella. MDR nontyphoidal Salmonella was categorized under the ‘serious threat’ category by the U.S. CDC in 2019 (CDC, ). The use of antimicrobials in animal and pet food might have been linked to the emergence of MDR strains of bacteria. Multidrug resistance is defined as the lack of susceptibility of a pathogen to at least one antimicrobial agent in three or more distinct categories of antimicrobials, whereas extensively drug-resistant (XDR) refers to microorganisms that exhibit susceptibility restricted to no more than two categories of antimicrobial agents (McDermott et al., ; Magiorakos et al., ). Many prevalence studies and pet food recalls have documented MDR Salmonella in pet foods. The study by Wong et al., as highlighted in Table , reported that S. London (ampicillin and gentamicin) and S. Infantis (nalidixic acid and streptomycin) isolated from pet chews were resistant to human antibiotics. Similarly, Finley et al. reported that several Salmonella isolates from raw pet foods from Mississauga, Calgary, and Guelph in Canada were resistant to 1 to 5 human antibiotics. In 2019, Salmonella serovars 4,5,12:i:- and Thompson isolated from dog treats were resistant to one or more antibiotics (Yukawa et al., ).
A 2021 study by Degi et al., who isolated 16 Salmonella isolates from cat feces, reported all isolates [Typhimurium (n = 4; 25%), Enteritidis (n = 9; 56.3%), and Kentucky (n = 3; 18.8%)] as MDR with strong resistance towards cefazolin, cefepime, ceftazidime, and ceftriaxone. Further, resistance against trimethoprim/sulfamethoxazole (11/16; 68.8%), ampicillin (10/16; 62.5%), ampicillin/sulbactam (9/16; 56.3%), gentamicin (9/16; 56.3%), nitrofurantoin (8/16; 50.0%), and amikacin (5/16; 31.3%) was also observed. In a separate set of studies from Japan, MDR Salmonella was isolated from raw pet foods, with S. Infantis reported to be resistant to streptomycin, kanamycin, tetracycline, and trimethoprim, and S. Typhimurium to nalidixic acid, ciprofloxacin, and chloramphenicol (Yukawa et al., ). Although MDR Salmonella has been isolated from dog food in several instances, the following paragraphs focus on pet foods associated with MDR Salmonella that were part of human outbreaks. In the 2019 salmonellosis outbreak that sickened 154 people, 164 (77%) of the 212 isolates analyzed (110 from humans and 102 from pig ear pet treats) were reported to be multidrug resistant. The isolates were resistant to many commonly used antibiotics such as amoxicillin-clavulanic acid (<1%), ampicillin (53%), azithromycin (<1%), cefoxitin (<1%), ceftriaxone (<1%), chloramphenicol (33%), ciprofloxacin (50%), fosfomycin (2%), gentamicin (27%), kanamycin (2%), nalidixic acid (26%), streptomycin (33%), sulfisoxazole (30%), tetracycline (58%), and trimethoprim-sulfamethoxazole (27%) (CDC, ). In another major MDR Salmonella outbreak in humans in the same year, raw turkey products, raw turkey pet food, and live turkeys were the sources of Salmonella (CDC, ). Upon whole-genome sequencing (WGS) of 487 S. Reading isolates, 314 isolates (64%) were reported to be resistant to ampicillin (52% of all 487 isolates), streptomycin (32%), sulfamethoxazole (31%), tetracycline (32%), kanamycin (3.4%), gentamicin (0.6%), nalidixic acid (0.4%), ciprofloxacin (0.4%), trimethoprim-sulfamethoxazole (0.4%), and fosfomycin (0.2%). Similarly, an interesting case–control study from Charite University Hospital, Germany, reported possible transmission of MDR organisms from dogs and cats to their owners (Hackmann et al., ); the study isolated MDR Salmonella from pet owners and linked it to their dogs and cats. In a separate study, commercial RMBDs in Japan were positive for MDR S. Infantis (n = 3), S. Typhimurium (n = 1), and S. Schwarzengrund (n = 1) (Yukawa et al., ). All 5 isolates were susceptible to ampicillin, cefazolin, cefotaxime, and gentamicin; 2 isolates were resistant to more than one antibiotic: one S. Infantis isolate was resistant to streptomycin, kanamycin, tetracycline, and trimethoprim, and the S. Typhimurium isolate to nalidixic acid, ciprofloxacin, and chloramphenicol, whereas the S. Schwarzengrund isolate was resistant to tetracycline. One of the latest MDR Salmonella strains isolated from human outbreaks associated with pet foods was linked to raw pet food and contact with cattle (Public Health Agency of Canada, ). Salmonella I 4,[5],12:i:-, isolated from the outbreak, was extensively drug-resistant, showing resistance to most of the commonly used antibiotics, such as ceftriaxone, azithromycin, trimethoprim/sulfamethoxazole, ampicillin, and ciprofloxacin. The strain was also resistant to older antibiotic drugs such as aminoglycosides, chloramphenicol, and tetracycline.
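To make the MDR and XDR definitions used in this section concrete, the sketch below classifies a single isolate from susceptibility results grouped by antimicrobial category. It is a minimal illustration of the Magiorakos-style criteria; the category names and the example profile are hypothetical placeholders, not data from any of the studies cited above.

```python
# Minimal sketch of the MDR/XDR definitions cited above (Magiorakos et al.):
#   MDR: non-susceptible to >=1 agent in >=3 antimicrobial categories
#   XDR: remains susceptible to agents in no more than 2 categories
# Assumes `profile` covers all antimicrobial categories relevant for the organism.

def classify_resistance(profile: dict[str, dict[str, bool]]) -> str:
    """profile maps antimicrobial category -> {agent: is_resistant}."""
    total_categories = len(profile)
    resistant_categories = sum(any(agents.values()) for agents in profile.values())
    susceptible_categories = total_categories - resistant_categories

    if total_categories >= 3 and susceptible_categories <= 2:
        return "XDR"
    if resistant_categories >= 3:
        return "MDR"
    return "non-MDR"

# Hypothetical example isolate: resistant in 3 of 8 tested categories -> MDR.
example_isolate = {
    "penicillins":               {"ampicillin": True},
    "aminoglycosides":           {"gentamicin": True, "streptomycin": True},
    "tetracyclines":             {"tetracycline": True},
    "fluoroquinolones":          {"ciprofloxacin": False},
    "cephalosporins":            {"ceftriaxone": False},
    "carbapenems":               {"meropenem": False},
    "macrolides":                {"azithromycin": False},
    "folate pathway inhibitors": {"trimethoprim-sulfamethoxazole": False},
}

print(classify_resistance(example_isolate))  # -> "MDR"
```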
SALMONELLA RISK MITIGATION STRATEGIES IN THE PET FOOD INDUSTRY

Different intervention strategies are often applied to mitigate Salmonella contamination and encompass a range of preventive measures across the food production continuum. Aside from the conventional thermal lethality step during the extrusion processing of pet food, nonthermal processing and chemical and biological interventions are often used as integrated strategies to form a comprehensive approach to mitigating Salmonella contamination in pet foods. Table provides a detailed list of the most common intervention strategies in different pet food types to control Salmonella.

8.1 Physical interventions

8.1.1 High-pressure processing

Raw pet food manufacturers aim to maintain the ‘raw-like’ attributes of their raw pet foods. Although traditional thermal pasteurization technology may negatively affect the sensory characteristics, flavors, and nutritional contents of food, nonthermal processing technologies like high-pressure processing (HPP) have attracted widespread attention from food industry practitioners. HPP ensures microbial safety without the addition of preservatives and allows processed food to maintain the natural flavors and nutritional value of the original food material (Daryaei & Balasubramanian, ). In HPP, food is hermetically sealed in a flexible container and subjected to high pressure of 100–600 MPa at room temperature using a liquid (typically water) as the pressure-transfer medium, subjecting the interior and surface of the food to even pressure to achieve pasteurization (Balasubramaniam et al., ). Because the food is in packaged form and does not directly contact the processing devices, HPP prevents postprocessing contamination of the food. This is critical in pet food manufacturing because postprocessing contamination is considered a key source of pathogen entry in pet foods. HPP is commonly used to reduce pathogen levels and extend the shelf life of various human foods such as ready-to-eat foods, meat products, juices and beverages, seafood, and vegetable products. HPP-related research specifically targeting pet foods is limited (Serra-Castello et al., ). This could be partly because the food constituents used in pet foods are derived or diverted from human food manufacturing, and an intervention that is effective in human foods is, in general, also effective in pet foods. However, this technology is now increasingly adopted by pet food producers worldwide (Serra-Castello et al., ). In the pet food industry, frozen raw and freeze-dried pet foods and treats are commonly HPP treated. Hasty et al. studied the effectiveness of HPP (600 MPa for 8 min) in raw beef pet food inoculated with 7 logs of a Salmonella surrogate (E. coli ATCC BAA 1427–31) and stored at −23°C. After 24 h and 5 days of HPP treatment, log reductions of 4.9 logs and 6.2 logs were reported on selective agar. Similarly, Serra-Castello et al. inoculated raw pet food consisting of chicken, vegetables, antioxidants, vitamins, and minerals with a three-serovar cocktail of Salmonella spp. (Derby, Typhimurium, and Enteritidis). The frozen block of pet food was vacuum-packed and subjected to HPP treatment. A maximum reduction of 9.33 log was observed at 750 MPa for 3.5 min at a pH of 6.09 ± 0.05. The same team investigated the effect of HPP against the same three serovars in raw pet food prepared from chicken, plant-based ingredients, salmon, and spices according to the commercial recipe and stored at −20°C until use.
A 9.08 log reduction of Salmonella was observed at both day 0 and day 14 for samples stored at −18°C after HPP treatment (750 MPa for 3.5 min) (Serra-Castello et al., ). Similarly, in another recent study by Lee et al. , three different raw pet food formulations with different levels of meat, organ meat, bone, seeds, fruits, and vegetables were inoculated with a 7-log cocktail of 6 strains of Salmonella (3 of them pet food isolates), treated at 586 MPa for 1 to 4 min, and either stored at 4°C or frozen at −10°C to −18°C for 21 days. Beef formulations maintained an inactivation level above 5 logs after the 586 MPa/2 min treatment and 1 day of storage, and this level was maintained for the duration of the study. Not only does HPP act as point-in-time mitigation in pet foods, but this intervention is also known to extend the shelf life of pet foods. Neshovska et al. applied HPP (600 MPa for 3 min) to 210 raw pet food samples consisting of three different diet compositions, including animal- and plant-derived constituents, and evaluated shelf life and pathogen prevalence over time after incubation at refrigerated temperature. Samples were analyzed after 0, 15, and 30 days for Salmonella, E. coli, and aerobic bacteria. Although the aerobic plate count was above the acceptable range after 15 days, E. coli and Salmonella were not detected throughout the study period, indicating that HPP potentially extended the shelf life of the raw pet foods. Different pathogens in raw pet foods tend to show different pressure resistance depending on species and strain. In general, Salmonella and L. monocytogenes strains displayed higher pressure resistance to HPP when compared with E. coli strains. The study also reported that the addition of lactic acid markedly enhanced the effectiveness of HPP against L. monocytogenes (Serra-Castello et al., ).

8.1.2 Cold plasma treatment

Cold plasma is a relatively new processing technology that uses an ionized gas consisting of neutral molecules, electrons, and positive and negative ions. It inactivates microbes via UV radiation and the reactive chemical products of the cold plasma ionization process, including ozone, charged particles, reactive oxygen species (ROS), reactive nitrogen species, and free radicals (Hertrich et al., ). In-package dielectric barrier discharge is one of the methods used to generate cold plasma inside a confined food package, known as atmospheric cold plasma treatment. The ROS and charged particles possess tremendous potential to injure and inactivate several microorganisms, including bacteria, fungi, and spores (Yadav & Roopesh, ). Cold plasma has been demonstrated to effectively reduce pathogens in several commodities, including seeds, fruits, vegetables, and pet treats (Hertrich et al., ; Yadav & Roopesh, ). Yadav and Roopesh studied the effect of cold plasma in freeze-dried pet food treats inoculated with Salmonella. Surface inoculation with an 8.2 log CFU/cm² Salmonella cocktail (Typhimurium and Senftenberg) was followed by modified atmospheric packaging and in-package atmospheric cold plasma (ACP) treatment. A 10 min ACP treatment followed by 7-day storage at room temperature (21°C) successfully reduced Salmonella counts by 4.5 logs. However, to the knowledge of the authors, cold plasma technology is not being used in any commercial pet food operation as a pathogen mitigation method.

8.1.3 Pulsed light treatment

Light-emitting diodes (LEDs) are semiconductor diodes that use electroluminescence to produce light.
High-intensity light pulses (in the ultraviolet wavelength range of 100–400 nm) emitted from LEDs can reduce surface contamination in low-moisture foods, including pet foods (Subedi & Roopesh, ). UV-A light (320–400 nm) exposure causes bacterial cell death by generating ROS within the cell. Pulsed UV treatment is used for surface decontamination (Subedi & Roopesh, ) of fruits and vegetables, meat and poultry, and low-moisture and high-moisture foods. However, there has been limited consumer acceptance of the usage of pulsed UV LEDs. In a study by Subedi and Roopesh , the application of 395 nm LED treatment alone and 395 nm LED treatment combined with vibration and mild hot air (50°C) caused 1.2 and 2.26 log reductions, respectively, in Salmonella spp. (Typhimurium and Senftenberg) levels in dry pet food pellets. In a separate study by Prasad et al. , dry pet food pellets with a water activity of ca. 0.54 were inoculated with a five-strain cocktail of Salmonella spp. (9 log CFU/mL) and air-dried for 45–60 min. The pet food was treated at a 2 cm distance from the LED light source at wavelengths of 365 and 395 nm. The 395 nm LED treatment showed significantly greater Salmonella inactivation (1.76 log reduction) compared with 365 nm (0.79 log reduction). Similar to cold plasma, to our knowledge, pulsed light treatment has not been used in any commercial pet food operation as a pathogen reduction step.

8.1.4 Irradiation

Food irradiation is the process whereby foodstuffs are exposed to a source of ionizing radiation. The technology was approved for use in pet foods, pet treats (including pig ears), and chews in 2001. Pet foods are exposed to sources of ionizing radiation, which can cause chemical changes. The approval from the FDA was obtained after a petition was filed to control the risk of Salmonella in pet foods, which was identified as a potential threat to pet owners, especially children. Depending on dosage, the radiation sources either destroy pathogens or render them incapable of reproduction. There are very limited studies available on the use of irradiation to mitigate pathogens in pet food. Unfortunately, irradiation of pet foods is not widely appreciated by pet owners, especially after the 2007–2008 reports from Australia, where as many as 87 cats developed neurological symptoms that were suspected to be due to the feeding of irradiated dry pet food. Vitamin A depletion due to gamma irradiation was suggested as a possible cause of the neurologic symptoms in the affected cats (Child et al., ). However, Zhu et al. studied the physiological effects of feeding irradiated pet foods to rats and reported that pet foods irradiated at 10, 15, and 25 kGy did not cause any abnormal physiological parameters, as measured in terms of general condition, food intake, food utilization rate, hematological parameters, biochemical parameters, viscera weight, histopathological reports, height, tail length, body temperature, heart rate, blood pressure, etc., when compared with the control groups. In one of the older studies, published by Ley et al. , raw frozen (−15°C) meat (horsemeat, kangaroo, and veal) intended for use as pet food was gamma irradiated (0.6 Mrad) to reduce Salmonella spp. (Typhimurium, Senftenberg, Oranienburg, Anatum, Good, and Minnesota) levels by 5 logs. The study also reported that postirradiation storage did not lead to the recovery of the irradiated Salmonella.
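The reductions cited throughout this section are reported in log10 units, and irradiation efficacy is often summarized as a D10 value, the dose required for a 1-log (90%) reduction. The short sketch below shows the underlying arithmetic; the D10 derived from the Ley et al. figures is an illustrative back-calculation from the numbers quoted above, not a value reported by that study, and the conversion between percent and log reductions is included because some studies in this review report one and some the other.

```python
import math

def log_reduction(initial_cfu: float, surviving_cfu: float) -> float:
    """Log10 reduction achieved by a treatment."""
    return math.log10(initial_cfu / surviving_cfu)

def percent_to_log(percent_reduction: float) -> float:
    """Convert a percentage reduction (e.g., 99.9) to log10 units."""
    return -math.log10(1 - percent_reduction / 100)

# A 5-log reduction means only 1 in 100,000 cells survives:
print(log_reduction(1e7, 1e2))           # 5.0

# Percent reductions understate what log units convey:
print(round(percent_to_log(90), 2))      # 1.0 log
print(round(percent_to_log(99.999), 2))  # 5.0 logs

# Illustrative D10 back-calculated from the Ley et al. figures quoted above
# (0.6 Mrad = 6 kGy achieving a 5-log reduction); not reported in the source.
dose_kgy, logs_achieved = 6.0, 5.0
print(dose_kgy / logs_achieved)          # ~1.2 kGy per log reduction
```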
In a study by Rana Raj , semimoist pet treats with 10%, 15%, and 25% moisture were treated with gamma irradiation at 2.0, 3.0, 4.0, 6.0, and 8.0 kGy and incubated at room temperature for up to 180 days. Samples were analyzed at 7, 15, 30, 45, 60, 75, 90, 120, 150, and 180 days. Nondetectable aerobic counts were reported in treats with 10% moisture treated with gamma irradiation doses of 6 and 8 kGy. As for Salmonella, even the untreated control treats were negative; therefore, this study was not informative about Salmonella reduction. Treatment with 4, 6, and 8 kGy of gamma radiation led to nondetectable coliform counts in treats with 10% and 15% moisture; however, during the last 30 days of storage, an increase in coliform counts was detected. In a separate study by Sethukali et al. , commercial semimoist pet foods were exposed to 2.5, 5.0, and 10.0 kGy of electron beam and X-ray irradiation after inoculation with E. coli O157:H7 and S. Typhimurium. Microbiological evaluation conducted every 20 days showed that pathogen reduction was better at the higher dosage (10 kGy). However, the higher dosage also accelerated lipid oxidation and protein degradation compared with the lower dosage (5 kGy) of electron beams and X-rays. In a recent study by Kakatkar et al. , a series of experiments was conducted wherein pet food kibble and powders composed of wheat, rice, and the fish by-product from Pangasius bocourti were subjected to gamma irradiation at dosages of 2.5 and 5 kGy. The investigation revealed a significant extension in shelf life, with samples treated at 2.5 kGy exhibiting a prolonged shelf life of 65 days compared with the control (28–35 days). When a higher dosage of 5 kGy of gamma radiation was administered, the total bacterial load remained below the detection limit (<10 CFU/g) throughout the entire 65-day observation period. Conversely, in the untreated control samples, although Salmonella was notably absent, a measurable presence of coliform bacteria (≤20 CFU/g) and Staphylococcus aureus (ranging between 2.13 and 2.52 CFU/g) was detected. However, upon exposure to 2.5 and 5 kGy of gamma irradiation, the pet food samples exhibited an absence of foodborne pathogens, including Salmonella. This underscores the efficacy of gamma irradiation as a means of eliminating potential Salmonella contamination in pet food products, thereby enhancing their safety and extending their shelf life.

8.2 Biological interventions

8.2.1 Bacteriophage

Due to their specificity, environmental friendliness, and natural abundance, bacteriophages are becoming popular against pathogens in various human foods such as raw meat (Sharma et al., ), fresh produce (Lopez-Cuevas et al., ), dairy (Phongtang et al., ), and seafood (Xu et al., ), among others. However, limited research is available on the use of bacteriophages in pet food. The use of bacteriophages in dry foods in general, and dry pet food specifically, is even more limited, which could be due to the limited growth of bacteria in dry food, making it difficult for phages to reach their host cells, and to the restricted mobility of phages in dry matrices. Heyse et al. explored the effectiveness of bacteriophages in mitigating Salmonella contamination in dried pet food. Pet food samples were inoculated with a cocktail mixture of S. Enteritidis, Montevideo, Senftenberg, and Typhimurium at ca. 6 logs, followed by thorough mixing to ensure uniform distribution of Salmonella.
A surface spray of phage preparation to achieve final concentrations of 5, 6, and 7 log PFU/g, followed by incubation at room temperature for 1 h, led to Salmonella reductions of 0.8, 1.4, and 2.0 log MPN/g, respectively. Similarly, Soffer et al. evaluated the efficacy of a bacteriophage cocktail consisting of 6 lytic monophages against a Salmonella cocktail of Enteritidis, Heidelberg, and Typhi in raw pet foods. Locally purchased raw pet food ingredients such as chicken, turkey, tuna, cantaloupe, and lettuce were inoculated with Salmonella (ca. 1,500 CFU/g on chicken; ca. 1,250 CFU/g on turkey trim; ca. 2,000 CFU/g on tuna and cantaloupe; and ca. 500 CFU/g on lettuce), followed by a 60-min attachment time and bacteriophage application. The results showed up to 88%, 68%, 80%, 92%, and 89% reductions in chicken, turkey, lettuce, tuna, and cantaloupe, respectively, with 9 × 10⁶ PFU/g of bacteriophage. In the case of turkey trim, ∼2 × 10⁷ PFU/g of bacteriophage caused an 86% reduction in Salmonella. The authors also evaluated the effect of bacteriophage-treated dry pet food kibbles on pets by feeding them to cats and dogs and reported no deleterious side effects. Bacteriophages have been used commercially in fresh pet food, for example, by the company Furchild Pet Nutrition, which claims to have done so successfully.

8.3 Chemical interventions

Physical interventions, which are point-in-time mitigation strategies, may lack carry-over effects to prevent postprocessing contamination. Meanwhile, the application of chemical additives and antimicrobials usually has the potential to act against pathogens for longer durations (Huss et al., 2017). Therefore, different organic acids, acidulants, medium-chain fatty acids, and plant-derived antimicrobial additives are often applied in pet foods as pathogen mitigation interventions.

8.3.1 Liquid smoke

Liquid smoke is a naturally derived flavor component and preservative used in human and pet foods, with known antimicrobial properties (Deliephan et al., ; Lingbeck et al., ). Liquid smoke is recognized as a Generally Recognized as Safe (GRAS) additive for human consumption by the U.S. FDA. In the food industry, liquid smoke fractions are used as flavoring agents, browning colorants, antioxidants, texture enhancers, and antimicrobial agents (Deliephan et al., ). To our knowledge, there is no published study evaluating liquid smoke as an antimicrobial against Salmonella in pet food. However, liquid smoke has been used as a flavoring ingredient in a wide range of pet food treats manufactured by major pet food companies, including Blue Buffalo, Purina, Smokehouse Pet Products, and Nutrish. Liquid smoke and its fractions containing phenols, carbonyls, and organic compounds have been found to be effective against pathogenic bacteria like L. monocytogenes and Staphylococcus aureus in meat and fish products (Lingbeck et al., ; Sunen et al., ). Though liquid smoke has not been commercially studied for mitigating Salmonella in pet food, it has shown antimicrobial activity against fungi in pet food and against Salmonella in other food matrices. Therefore, in addition to its use as a flavoring agent, the potential use of liquid smoke as an antimicrobial agent in pet foods like dry kibble, semimoist treats, and RMBD needs to be determined. A study by Deliephan et al. evaluated the antifungal effects of liquid smoke fractions against Aspergillus flavus in semimoist pet food.
Researchers have evaluated liquid smoke fractions in broth assays against Salmonella and have demonstrated their inhibitory activity. Kim et al. reported the minimum inhibitory concentration (MIC) of liquid smoke from rice hull smoke condensate to be 0.822% against S. Typhimurium. Another study by Van Loo et al. evaluated four commercial smoke extracts, for which the MICs ranged from 0.5% to 12% against S. Typhimurium. Although there is very limited published literature on the palatability of liquid smoke to pets, liquid smoke appears to have good application potential in pet foods, given the commercial availability of various kinds of smoke-flavored pet treats from major pet food companies.

8.3.2 Organic acids and acidulants

There are several organic acids and acidulants commonly used as processing aids or as ingredients in human and animal foods, including lactic acid, citric acid, propionic acid, phosphoric acid, acetic acid (and their salts), and sodium bisulfate (SBS). Most of the organic acids and acidulants are considered GRAS additives by the U.S. FDA and hence do not have a daily maximum acceptable intake for humans or animals, which increases their applicability in foods. However, their dosage is limited by their negative impact on the organoleptic and color attributes of food and meat products in many cases (Kiprotich & Aldrich, ). Nontraditional chicken products, such as hearts and livers, are commonly used in the manufacturing of pet foods and are becoming increasingly popular in RMBD (Cancio et al., ). Several studies have looked at Salmonella decontamination strategies for these products. Cancio et al. evaluated the use of peroxyacetic acid (PAA), buffered vinegar, and cultured dextrose fermentate to reduce Salmonella on artificially inoculated raw chicken livers intended for use in pet foods. After immersion, there was a significant Salmonella reduction with all treatments, including the water control. More recently, Nakimera et al. evaluated the efficacy of a blend of citric acid and hydrochloric acid (CP), PAA, and sulfuric acid against Salmonella and mesophilic aerobic plate counts (APC) on chicken hearts and livers commonly used in pet food. All antimicrobials reduced Salmonella counts by more than one log, in contrast to the water control. The results of these studies demonstrate that Salmonella can be mitigated in raw poultry products intended for pet food production using processing aids that are already common in the meat industry. In the animal feed industry, chemical additives are often derived from blends of organic acids, such as 3-hydroxy-3-methylbutyrate (HMB), an organic acid available commercially in both free acid (HMBFA) and calcium salt (CaHMB) forms for use in animal feed. HMB functions as a metabolite of the essential amino acid leucine in animals and is recognized as GRAS by the U.S. FDA. Additionally, due to its organic acid properties, it also imparts antimicrobial effects when used in animal food and feed. A study by Huss et al. evaluated HMBFA and CaHMB as coatings on pet food kibble against Salmonella. Coating with 1.5% HMBFA reduced Salmonella counts by ∼4.9 logs in 1 day, whereas 1.5% CaHMB decreased Salmonella by ca. 7.1 logs in 7 days. All HMBFA and CaHMB treatments reduced Salmonella counts to undetectable levels within 14 days. Deliephan et al. evaluated two commercial organic acid mixtures containing 2-hydroxy-4-(methylthio)butanoic acid (HMTBa) at 2% and 1%, respectively, as coatings on kibble inoculated with Salmonella or E. coli O157:H7 (STEC).
Salmonella counts were reduced by ca. 3 logs after 12 h and up to 4.6 logs after 24 h. STEC counts were also reduced by ca. 2 and 3 logs after 12 h and 24 h, respectively. Similarly, O'Bryan et al. evaluated a proprietary organic acid blend consisting of 5%–25% nonanoic acid, 1%–25% butyric acid, and 1%–50% trans-2-hexenal on meat and bone meal (commonly used as a pet food ingredient) at 0, 1, 1.5, or 2 mL/kg of feed. Microbial analysis over time showed about a 1 log reduction of Salmonella by 24 h and a ca. 2 log reduction by 14 days. SBS is another GRAS acidulant approved for use as an additive in human and animal foods by the U.S. FDA. Due to its hygroscopicity and desiccant effect, SBS has been found to be effective in killing pathogens such as Salmonella and Campylobacter (Dhakal & Aldrich, ; Line, ). SBS is commonly used in animal diets for the acidification of feline urine and for the preservation of soft-moist treats and liquid digests. Dhakal et al. evaluated SBS, lactic acid, phosphoric acid, and combinations of butyric and propionic acids in rendered chicken fat (used to coat dry pet food kibbles) inoculated with Salmonella. SBS or lactic acid at 0.5% individually, or a combination of SBS with propionic and butyric acid, reduced Salmonella loads by >5.5 logs within 15 h in the chicken fat without negatively altering the shelf life of the rendered fat (Dhakal et al., 2019). Dhakal and Aldrich evaluated the acidulants SBS, phosphoric acid, and lactic acid, individually and in combination with the organic acids butyric and propionic acid, in different fat types, namely, chicken fat, canola oil, Menhaden fish oil, lard, and tallow, that are intended to coat dry pet food kibbles. The treated fats were inoculated with approximately 8 logs of Salmonella. SBS at 0.5%, phosphoric acid at 0.5%, and lactic acid at 0.25%, individually and in combination with butyric acid at 0.075% and propionic acid at 0.05%, reduced Salmonella loads below detectable limits within 2 h across all fats. The highest antibacterial efficacy was observed in Menhaden fish oil, with Salmonella loads reduced to below detectable limits in less than 1 h.

8.3.3 Fatty acids

Medium- and long-chain fatty acids are considered effective antimicrobial feed additives in animal feed. Research by Cochrane et al. demonstrated the antimicrobial effects of medium-chain fatty acids (MCFA) against Salmonella in rendered protein meals used in the animal feed industry. In pet food research, the medium-chain fatty acids caproic (C6), caprylic (C8), and capric (C10) acids were evaluated by Dhakal and Aldrich as coatings on dry kibbles inoculated with Salmonella. C6, C8, and C10 at 0.5%–1% reduced Salmonella levels by >4.5 logs after 5 h of treatment. A combination of C6 + C8 (0.25%–0.5%) reduced Salmonella levels to below the detection limit in 4 h, whereas C6 + C10 (0.25%–1%) and C8 + C10 (0.25%–1%) did the same in 2–4 h and 1–5 h, respectively, displaying potential synergism. Although the MCFA were effective against Salmonella in pet food, MCFA-coated dry dog kibbles did not enhance the palatability of the diets, and dogs preferred control diets over the MCFA-coated diets. On the other hand, fish oils are rich sources of long-chain omega-3 fatty acids and are used as human dietary supplements and as pet food ingredients. Menhaden fish oil is a popular commercial pet food ingredient rich in long-chain omega-3 polyunsaturated fatty acids (PUFA).
PUFAs from fish sources have shown antibacterial activity against several pathogenic microorganisms, including E. coli, S. aureus, and Salmonella (Chitra Som & Radhakrishnan, ). Dhakal and Aldrich evaluated Menhaden fish oil in vitro and as a coating on dry pet food kibble against Salmonella at storage temperatures of 25°C, 37°C, and 45°C. Salmonella levels in the fish oil were below detection limits by 2 h at all temperatures. On the kibble, the fish oil showed higher antimicrobial activity after 12 h at 25°C and after 2 h at 45°C, thus increasing with temperature. Overall, higher antimicrobial activity of the fish oil was observed at 37°C and 45°C throughout the experiment, indicating that higher holding temperatures could enhance the antimicrobial efficacy of Menhaden fish oil.

8.3.4 Plant-derived antimicrobials

Plant-derived antimicrobials (PDA), such as trans-cinnamaldehyde, carvacrol, thymol, eugenol, and caprylic acid, applied as vegetable oil- or chitosan-based antimicrobial sprays on pet food kibble for reducing Salmonella, were investigated by Chen et al. . All PDAs at 1% and 2%, applied in vegetable oil or chitosan, reduced Salmonella by at least 2 log CFU/g in 3 days compared with the control. Trans-cinnamaldehyde at 2% was the most effective, with a 3–3.5 log CFU/g reduction of Salmonella during storage. Kiprotich et al. treated Salmonella-inoculated raw chicken breast meat with thyme oil at 0.5% (v/v) added into lemon juice and supplemented with Yucca schidigera extract, a natural emulsifier, at 23°C for 8 h. The 0.5% thyme oil treatment resulted in a 3.48 log reduction of Salmonella in 8 h. Boskovic et al. combined thyme oil treatment at 0.3% with vacuum packaging of minced pork meat, a common ingredient of raw pet food, stored under refrigeration at 3 ± 1°C for 15 days. About a 1.7 log reduction of Salmonella counts was observed by 15 days. Similarly, Thanissery and Smith combined thyme oil and orange essential oil at 0.5% (v/v) each and achieved a 2.6 log reduction of Salmonella and a 3.6 log reduction of Campylobacter coli in chicken breast meat, another commonly used ingredient in raw pet food diets. A limitation to the broader commercial application of PDAs in pet foods could be their cost. Additionally, as mentioned earlier, another limitation of some organic acids, acidulants, and fatty acids in pet foods is their impact on sensory attributes like taste, aroma, and flavor due to their low pH, high acidity, and strong smell. Similarly, the strong smell and taste of PDAs like essential oils are limitations to their application. In these cases, a "slow-release mechanism" for these ingredients through encapsulation technologies could be an alternative. For instance, encapsulating organic acids within soluble and edible vegetable oil films allows for a slow release of the acid into the food product at a controlled rate, thereby minimizing the organoleptic impact in terms of flavor and taste (Kiprotich & Aldrich, ).

8.3.5 Ozone treatment

Ozone treatment employs a chemical method in which contaminated food products are exposed to ozone in either an aqueous or gaseous phase. When ozone molecules create oxidative reactive species, they rupture the cell wall and damage cell wall proteins, enzymes, and nucleic acids (Brodowska et al., ; Cano et al., ). The excess ozone rapidly decomposes to oxygen, thus leaving no toxic residues in food.
8.3.5 Ozone treatment

Ozone treatment is a chemical method in which contaminated food products are exposed to ozone in either an aqueous or a gaseous phase. Ozone generates oxidative reactive species that rupture the cell wall and damage cell wall proteins, enzymes, and nucleic acids (Brodowska et al.; Cano et al.). Excess ozone rapidly decomposes to oxygen, leaving no toxic residues in the food. Ozone treatment requires no thermal energy, which makes it suitable for heat-sensitive products and reduces energy input requirements (Cano et al.; Kaavya et al.). Ozone has been used to reduce S. Typhimurium, E. coli, and Listeria innocua contamination on fruits and vegetables such as cilantro, strawberries, romaine lettuce, and tomatoes (Alexopoulos et al.; Chandran et al.; Chen et al.; Gibson et al.). Chandran et al. evaluated the effectiveness of spray and batch wash ozone systems (5 ppm) against Salmonella and L. monocytogenes on the surfaces of carrots, sweet potatoes, and butternut squash commonly used in RMBD. The batch wash system produced a mean microbial reduction of up to 1.56 CFU/mL, which, however, was not significantly different from the control. With the spray wash system, freeze-tempered produce showed a higher bacterial reduction with 5 ppm ozone than the control but did not differ from room-temperature produce. Ozone gas has also been used to decontaminate Aspergillus flavus spores in extruded pet foods: Silva et al. reported a reduction of up to 98.3% of inoculated spores after 120 min of exposure to 40 or 60 µmol/mol ozone and an 84% reduction after 30 min at 40 µmol/mol. Salmonella decontamination of raw chicken parts using ozonated water has achieved only minimal reductions (Cano et al.). Overall, ozone-based treatment of pet food products has not been extensively studied, leaving a research opportunity.
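Because the ozone results above are reported as percent reductions (for example, 98.3% of inoculated spores) while most other interventions in this section are reported in logs, a conversion between the two scales helps when comparing efficacy. The sketch below is illustrative arithmetic only; the 98.3% value is the only figure taken from the text.

```python
import math

def percent_to_log_reduction(percent_reduction: float) -> float:
    """Convert a percent reduction (0 <= p < 100) to a base-10 log reduction."""
    surviving_fraction = 1.0 - percent_reduction / 100.0
    return -math.log10(surviving_fraction)

# 90%, 99%, and 99.9% reductions equal 1, 2, and 3 logs; the 98.3% spore
# reduction reported for ozone corresponds to roughly a 1.8-log reduction.
for pct in (90.0, 98.3, 99.0, 99.9):
    print(f"{pct:5.1f}% -> {percent_to_log_reduction(pct):.2f} logs")
```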
REGULATORY MEASURES IN PET FOODS

In the United States, pet food is among the most heavily regulated food products, and compliance with both federal and state regulations is mandatory (Pet Food Institute). As shown in Figure , four major bodies are involved in pet food safety and quality. First, the Association of American Feed Control Officials (AAFCO) provides guidelines for the production, labeling, and sale of pet food and related products, but it does not test, regulate, or certify pet food to ensure it meets standards (AAFCO). Second, the U.S. FDA regulates and monitors the safety of pet food products, including their ingredients, labeling, and production processes (FDA). The FDA establishes and enforces standards for pet food under the Federal Food, Drug, and Cosmetic Act; it conducts inspections, investigates complaints, and issues recalls when necessary to address safety concerns.
The Center for Veterinary Medicine (CVM) is a branch of the FDA specifically focused on veterinary medicine, including the regulation of animal food and drugs. It works to ensure the safety and efficacy of products intended for animals, including pet food. The CVM conducts research, establishes regulations, and collaborates with other agencies and stakeholders to address emerging issues in the pet food industry. Each state department of agriculture has its own requirements, regulations, and jurisdiction regarding the labeling and safety of pet foods. With the passage of FSMA in 2011, new mandatory product safety standards are required for almost all U.S. pet food producers. The Preventive Controls for Animal Foods rule (21 CFR 507) requires pet food makers to implement current GMPs (cGMPs), identify and evaluate biological, chemical, and physical hazards, and develop and implement food safety plans detailing the steps they are taking to ensure product safety. Overall, the FSMA provisions to develop, implement, and maintain sanitary standards and robust verification activities, including environmental monitoring when needed, are expected to help reduce the burden of pet food contamination leading to product recalls and potential outbreaks. Pet foods are not permitted to contain Salmonella; such products would be classified as adulterated under CPG Sec. 690.800 (FDA, ). The same standard is observed in the European Union under E.U. Commission Regulation No. 142/2011, Annex XIII. In Canada, however, the requirement concerning the presence of Salmonella is not explicit. The regulation of pet food by the state departments of agriculture is governed by state laws, in collaboration with federal laws. Although pet food is monitored by the FDA, the agency does not strictly enforce all the laws that apply to pet food and animal feed. Most states in the United States require pet food manufacturers to register every year for the foods and treats they sell within the state (APPA, ). Some states infrequently inspect adherence to labeling requirements and randomly test pet foods for microbial safety compliance, whereas some states only investigate consumer complaints along with the FDA. Although pet foods are monitored by the FDA, routine inspections are conducted at least once every three years for domestic high-risk facilities and at least once every five years for non–high-risk facilities. The agency also conducts targeted inspections, typically in relation to an outbreak, factors posing contamination risks, food consumption patterns, regional influences, trends, and the compliance history of the manufacturer (FDA, ). Meat and by-product meals known to be unfit for human consumption from USDA-inspected meat processing facilities have been used by pet food and animal feed manufacturing plants in FDA-inspected operations. In one notable example, the FDA initiated a recall of canned dog food from Evanger's Dog & Cat Food Company in 2017 after pentobarbital was traced back to a 'USDA-approved' meat supplier. However, further investigation revealed that the lot was USDA labeled as "Inedible hand deboned beef - for pet food use only; Not fit for human consumption." This indicates that pet food safety regulations are not as stringently enforced as human food regulations.
CONCLUSION
The pet population and the pet food market continue to grow, boosted by the increasing humanization and premiumization of pets.
Pet owners' preferences and perceptions play a major role in the choice and type of feed a pet receives. However, as the owner's choice drives the new and emerging products and feed types in the pet food industry, it is crucial not to compromise the health of pet owners for the sake of pet food advancements. Given the recent rise in Salmonella-linked recalls in pet food, particularly RMBD, and the concurrent increase in human cases, especially among children and the elderly, a multifaceted approach is necessary. This approach should involve the pet food industry, consumer education, researchers, veterinarians, and policymakers to safeguard the health of both pets and their owners (Figure ). Despite the lack of standardized pathogen elimination steps in pet food production, several measures can mitigate Salmonella contamination. Preventing cross-contamination and postprocessing contamination can help safeguard dry pet foods. The use of approved interventions such as processing aids, GRAS chemicals, and biological methods could be an alternative for mitigating Salmonella contamination in raw pet foods. Additionally, implementing current GMPs (cGMPs) and proper Hazard Analysis and Critical Control Point plans at manufacturing facilities can significantly enhance the safety of pet food products. Educating pet owners about the potential risks associated with product handling, cleaning, hand washing, and sanitation, as well as the risk of carrier pets and Salmonella, can help reduce the incidence of human Salmonella outbreaks linked to pet foods. Proper storage and handling of pet foods, maintaining appropriate temperature and relative humidity, and ensuring the quality and cleanliness of raw ingredients are essential practices for keeping pet food safe. Janak Dhakal: Conceptualization; writing–original draft; methodology; data curation; investigation; funding acquisition; project administration. Leslie Cancio: Investigation; writing–review and editing; validation; resources; data curation. Aiswariya Deliephan: Investigation; writing–review and editing; validation; resources. Byron Chaves: Review and editing; validation. Stephan Tubene: Review and editing; validation. This work was supported by the Capacity Building Grant [project award no. 2024-38821-42105] from the U.S. Department of Agriculture's National Institute of Food and Agriculture (USDA-NIFA). The authors have no conflict of interest to declare.
Determination of the Microbial Shift in the Gingival Sulcus of Women during Each Trimester of Pregnancy: A Cross-Sectional Study
7d555340-7756-4308-8d55-a95faa47bbf7
11509706
Dentistry[mh]
The Fédération Dentaire Internationale (FDI) recently redefined oral health as a multifaceted condition that includes the ability to speak, smile, smell, taste, touch, chew, swallow, and convey a variety of emotions through facial expressions with confidence and with the absence of pain, discomfort, or craniofacial-complex disease. The definition further states that oral health is a component of overall health, including physical and mental well-being . The benefits of proper oral hygiene extend beyond avoiding dental caries and periodontal disease and include enhancing a person's general state of health . Multiple studies have shown a connection between a person's overall systemic health and dental health . The dental health of women must be given particular attention. Different physiological situations, including adolescence, pregnancy, and menopause, should be considered, since they affect women's general health status . Pregnant women's dental health is paramount, since it directly affects the expectant mother and the unborn child's future . Pregnancy causes generalized changes in a woman's body due to the progressive cycle of hormonal influences . The most common conditions affecting periodontal health include gingivitis and periodontitis. It has been reported that at least 35% of pregnant women are usually affected by gingivitis . Moreover, studies have shown ecological shifts in the supragingival microbiota of pregnant women affecting the mother's gingival health and the child's growth and development . Many pathogens may contribute to periodontal diseases, mainly gram-negative bacteria, including red, orange, and green complex bacteria. The red complex comprises Porphyromonas gingivalis, Tannerella forsythia, and Treponema denticola. In comparison, the orange complex consists of Prevotella intermedia and Fusobacterium nucleatum. The green complex consists of Eikenella corrodens and Capnocytophaga species, with Aggregatibacter actinomycetemcomitans in a separate category (purple) . Researchers found that clinical periodontal diagnosis positively correlated with quantification of the subgingival microbiota at different trimesters of pregnancy. Moreover, it was observed that the microbial species Tannerella forsythia was common during the first trimester of pregnancy, but its abundance decreased significantly toward the third trimester . Sex hormones can influence or alter the composition of the oral microbiome, leading to shifts in immune responses and dysbiosis. As a result, periodontal infections in pregnant women may cause severe systemic gingival inflammatory responses. Further, it was noted that factors such as low birth weight, pre-eclampsia, other pregnancy complications, activation of maternal inflammatory cell responses, cytokine release, and dysbiosis in the oral microbiota play a possible role in causing gingival complexities during the different trimesters of pregnancy . During pregnancy, the oral microbiome undergoes a pathogenic change that reverts to a baseline or "healthy microbiome" during the postpartum period, with female sex hormones such as progesterone and estrogen thought to mediate this shift . To improve prediction and intervention strategies for unfavorable pregnancy outcomes, it is essential to understand the oral microbiota alterations that occur throughout pregnancy and their relationship with adverse pregnancy outcomes.
Despite the studies on oral microbiota shifts during pregnancy trimesters, there is no agreement about the true nature of microbiome changes during pregnancy, since contradictory and surprising data have been reported. Hence, the present study is required to understand how gingival microbiota shifts occur at various stages of pregnancy. Thus, this study aims to identify different types of bacterial species from the gingival sulcus during different pregnancy trimesters in women visiting obstetrics/gynecology centers in Riyadh City, Saudi Arabia. This study was conducted among pregnant women attending the gynecologic/obstetric center at Alyamamah Hospital, Riyadh, Saudi Arabia. All the study participants underwent a clinical periodontal examination and gingival crevicular fluid sampling in the first, second, and third trimesters, with a control group of nonpregnant women. The study proposal was submitted to the research and innovation center of Riyadh Elm University, and approval was obtained (FPGRP/2021/645/693/680). The authorities of Alyamamah Hospital, Riyadh, were contacted, and a written agreement was made to conduct the study. Before the investigation, the purpose of the study was described to the participants, and signed informed consent was acquired. The sample size was calculated by compromise power analysis in G*Power considering an effect size of 0.5 (chi-square tests), an alpha error probability of 0.05, and a study power of 0.95. This resulted in a sample size of 109, which was rounded up to 110. Ninety of the total were pregnant, and 20 were nonpregnant women. The study included Saudi nationals who were pregnant women in any trimester. Patients with systemic diseases and young women under the age of 15 were excluded from the study. The study also excluded individuals receiving corticosteroid therapy and heparin, those who had taken systemic antibiotics in the previous four weeks, and patients with deep periodontal pockets. Data collection: A ten-item questionnaire derived from previous research and a clinical oral examination were used to collect the primary data. The questionnaire included a brief description of the study, an invitation to participate, and a consent form to be signed. The questionnaire comprised two sections. Section one enquired about socioeconomic background (age and education). Section two recorded practices related to gingival health: tooth-brushing frequency (no brushing, once/day or more), smoking (yes/no/in the past), daily use of sugar drinks and sugary foods (yes/no), and pregnancy frequency (no pregnancy, once, or more). The clinical examination: A single trained periodontics resident performed all oral examinations by strictly following an infection control protocol while recording plaque and gingival index scores. Dental plaque was recorded using the plaque index of Silness and Loe (1964), and gingival condition was recorded using the gingival index of Loe and Silness (1963) . Microbiological analysis: Sample collection and bacterial isolation: The sample collection and clinical examination were completed on the same day, and the samples were immediately sent to the laboratory for incubation. Samples were collected using sterile absorbent paper points (META BIOMED, Korea), size 30 (Guentsch et al. 2011), from the gingival sulcus of the mesial side of the maxillary right second premolar (Kornman 1980) of pregnant females during the first, second, and third trimesters, along with a control group of nonpregnant females.
The sterile absorbent paper points were placed in the gingival sulcus until light pressure was felt for thirty seconds (Bunaes et al. 2017) and then transferred to thioglycolate broth, which is used to grow anaerobic bacteria. Thioglycolate broth containing the absorbent paper points was incubated at 37 °C for 24–48 h. After growth, the microorganisms were subjected to Gram staining to differentiate Gram-positive and Gram-negative bacteria. Subsequently, subculture was performed on a sheep blood agar plate under anaerobic conditions generated with an anaerobic jar kit (BR0038; Thermo Fisher Scientific, Waltham, MA, USA) and incubated at 37 °C for 24–48 h. Bacterial identification: The conventional identification of the bacterial species was carried out by an experienced microbiologist from the department of basic sciences at Riyadh Elm University, Riyadh, Saudi Arabia. In addition, the VITEK 2 system (Biomerieux, Durham, NC, USA) was used for fast and accurate microbial identification. Based on the VITEK 2 system, bacterial species were identified in Anfas Alraha's private laboratory in Riyadh City, Saudi Arabia. The VITEK 2 system identified various types of bacterial species from the gingival sulcus of pregnant women at different trimesters of pregnancy and among nonpregnant women. The VITEK 2 system offers an extensive identification and susceptibility menu and an expanded identification database, and it is the most automated platform available. Reagent Cards used with the VITEK 2 machine: The reagent cards have 64 wells that can each contain an individual test substrate. Substrates measure various metabolic activities, such as acidification, alkalinization, enzyme hydrolysis, and growth in the presence of inhibitory substances. For inoculation, each card has a pre-inserted transfer tube. The card's bar codes contain information on product type, lot number, expiration date, and a unique identifier, which can be associated with the sample before or after loading the card into the system. There are currently four reagent cards available for identifying different organism classes, as follows: GN: Gram-negative fermenting and non-fermenting bacilli; GP: Gram-positive cocci and non-spore-forming bacilli; YST: yeasts and yeast-like organisms; and BCL: Gram-positive spore-forming bacilli. Culture Required: Culture media are selected according to the organisms to be isolated, such as blood agar, Columbia blood agar, and neomycin blood agar. Suspension Preparation: A sufficient number of colonies of a pure culture are transferred using a sterile swab or applicator stick, and the microorganisms are suspended in 3.0 mL of sterile saline (aqueous 0.45% to 0.50% NaCl, pH 4.5 to 7.0) in a 12 mm × 75 mm clear plastic tube. Inoculation: An integrated vacuum apparatus inoculates the identification cards with the microorganism suspensions. The test tube containing the microorganism suspension is placed into a cassette rack, the transfer tube is inserted into the corresponding suspension tube, and the identification card is placed in the corresponding slot. When the vacuum is applied and air is reintroduced into the station, the organism suspension is forced through the transfer tube into micro-channels that fill all the test wells. Card Sealing and Incubation: A mechanism cuts off the transfer tube and seals each inoculated card before loading it into the carousel incubator. The carousel incubator can store up to 30 or 60 cards. All card types are incubated online at 35.5 ± 1.0 °C for 8 to 12 h.
Test Reactions: Calculations are performed on raw data and compared to thresholds to determine reactions for each test. On the VITEK 2 Compact, test reaction results appear as "+", "−", "(+)", or "(−)". Reactions that appear in parentheses are indicative of weak reactions that are too close to the test threshold. Database Development: Large strain sets of well-characterized microorganisms tested under various culture conditions form the databases for the VITEK 2 identification products. These strains are derived from a variety of clinical and industrial sources as well as from public (e.g., ATCC) and university culture collections. The bacterial species identified are shown in . Statistical analysis: Descriptive statistics of frequency distributions and percentages were calculated for the categorical variables (personal characteristics, oral health-related variables, and bacterial species). The mean scores/ranks and standard deviation values were obtained for the continuous variables (plaque and gingival scores). Chi-square and Fisher's exact tests were applied to assess the relationships between pregnancy status and personal characteristics, different bacterial species in pregnant and nonpregnant women, bacterial species in nonpregnant women and in different pregnancy trimesters, different bacterial species and pregnancy frequencies, different pregnancy trimesters and Gram stain bacteria, pregnancy frequencies and Gram stain bacteria, and Gram stain bacteria in nonpregnant and pregnant women. Normality tests indicated a non-normal distribution of the PI and GI scores. Hence, the nonparametric Mann–Whitney U test was applied to compare mean PI and GI scores/ranks between pregnant and nonpregnant women, while the Kruskal–Wallis test was applied to compare mean PI and GI scores/ranks across pregnancy frequencies, pregnancy trimesters, and different bacterial types (a minimal sketch of these tests is given below). A value of p < 0.05 was considered significant for all the statistical tests. All the data were analyzed using a statistical analysis package (IBM SPSS version 25, Armonk, NY, USA). A total of 110 women (n = 110) visiting the Alyamamah Hospital gynecology/obstetric center participated in this study. The characteristics of the study participants are shown in . Of the 110 study participants, 90 (81.8%) were pregnant and 20 (18.2%) were nonpregnant. The majority of participants, 96 (87.3%), were in the 26–40 age group, 107 (97.3%) weighed 61–100 kg, and 105 (95.5%) measured 140–180 cm in height. More than half of the study participants had a university level of education (64: 58.2%), 74 (67.3%) avoided sugar drinks, and none were smokers. The oral health-related variables indicated that most of the study participants brushed their teeth once per day (73: 66.4%) and that 87 (79.1%) did not perform oral disinfection. The reported mean ± SD values of the plaque and gingival scores were 1.59 ± 0.63 and 1.68 ± 0.60, respectively. The pregnancy-related variables and the bacterial species characterized in this study are shown in . More than half of the study participants had two or more pregnancies: 65 (59.1%) had two or more pregnancies, while 25 (22.7%) were pregnant for the first time. Most of the study participants were in their third trimester of pregnancy (37: 33.6%), followed by the second (35: 31.8%) and first (18: 16.4%) trimesters.
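The nonparametric comparisons described in the statistical analysis above can be reproduced with standard open-source libraries. The following is a minimal Python sketch using scipy (the study itself used IBM SPSS); the arrays are hypothetical placeholders, not the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 table: pregnancy status (rows) vs. presence of a bacterial species (columns)
table = np.array([[40, 50],
                  [0, 20]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)        # preferred when expected cell counts are small

# Hypothetical plaque-index scores: pregnant vs. nonpregnant participants
pi_pregnant = np.array([1.2, 1.7, 2.0, 1.5, 1.8])
pi_nonpregnant = np.array([1.4, 1.6, 2.1, 1.3])
u_stat, p_mw = stats.mannwhitneyu(pi_pregnant, pi_nonpregnant, alternative="two-sided")

# Hypothetical gingival-index scores across the three trimesters
gi_t1, gi_t2, gi_t3 = [1.5, 1.7, 1.6], [1.8, 1.9, 1.7], [1.9, 2.0, 1.8]
h_stat, p_kw = stats.kruskal(gi_t1, gi_t2, gi_t3)

print(p_chi2, p_fisher, p_mw, p_kw)   # p < 0.05 taken as significant, as in the study
```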
The bacterial characterization indicated that Actinomyces naeslundii (AN) was the predominant bacterial species found in the study participants, followed by Lactobacillus fermentum (LF) (23.6%), Veillonella (VL) (10%), and unidentified organisms (9.1%), while Lactobacillus plantarum (LP) (0.9%) was the least prevalent species in the study sample . The majority of the bacteria observed in the study sample were Gram-positive (83: 75.5%). The relationship between pregnancy status and the personal characteristics of the study participants is shown in . The pregnant and nonpregnant women did not differ significantly in age (p = 0.281), weight (p = 0.408), height (p = 0.195), or sugar drink intake (p = 0.196). However, pregnant and nonpregnant women differed significantly across educational levels (p = 0.029) and tooth-brushing frequency (p < 0.001). Pregnancy and gingival sulcular microbiota: The characterization of subgingival bacterial species of pregnant and nonpregnant women is shown in . AN (40: 36.4%) was the predominant bacterial species observed in the studied sample (as shown in ). The subgingival sulcus of pregnant women predominantly demonstrated AN (40: 44.4%), followed by LF (17: 18.9%), VL (11: 12.2%), PD (6: 6.7%), BB (4: 4.4%), UO (4: 4.4%), LA (3: 3.3%), LG (1: 1.1%), CL (1: 1.1%), and LP (1: 1.1%). In contrast, nonpregnant women showed mainly LF (9: 45%), followed by UO (6: 30%), BB (2: 10%), LA (1: 5%), AO (1: 5%), and CL (1: 5%). However, LG, AN, PD, and VL were absent from the gingival sulcus of nonpregnant women. When the presence of subgingival bacterial species was compared between pregnant and nonpregnant women, a statistically significant difference was observed (p < 0.001). The bacterial species in nonpregnant women and during the different pregnancy trimesters are shown in . LF was the predominant species in nonpregnant women (9: 45%) and in pregnant women in the first pregnancy trimester (8: 44.4%). However, AN became the predominant species during the second (17: 48.6%) and third (17: 45.9%) pregnancy trimesters. The bacterial species found across the various trimesters were as follows: LF (8: 44.4%) was the most abundant species during the first trimester of pregnancy, followed by AN (6: 33.3%) and others. In comparison, AN (17: 48.6%) and LF (7: 20%) were commonly found during the second trimester of pregnancy, while AN (17: 45.9%) and VL (9: 24.3%) were the most prevalent species observed in the third trimester of pregnancy. A statistically significant difference was observed when the prevalence of the various bacterial species was compared across the three pregnancy trimesters (p = 0.010). The relationship between the different bacterial species and pregnancy frequencies was as follows: AN was predominant in first-time pregnant women (15: 60%) and in those with two or more pregnancies (25: 38.5%). However, the bacterial species LG, CL, BB, and LP were absent in first pregnancies compared to those with two or more pregnancies. The presence of the different bacteria did not differ significantly between first-time pregnant women and those with two or more pregnancies (p = 0.568). The distribution of Gram stain bacteria in the different trimesters of pregnancy was examined. We observed a large proportion of Gram-positive bacteria in the first trimester (16: 88.90%), second trimester (30: 85.70%), and third trimester (23: 62.20%).
On the other hand, we observed Gram-negative bacteria in the first (2: 11.10%), second (3: 8.60%), and third (14: 37.5%) trimesters of pregnancy. In addition, 2 (5.70%) unknown bacteria were found in the second trimester of pregnancy. When the sulcular bacterial species were assessed based on Gram staining across the three pregnancy trimesters, a statistically significant difference was observed (p = 0.007). The distribution of Gram stain bacteria was also examined in first-time pregnant women and in those with two or more pregnancies. Gram-positive bacteria were predominant in first-time pregnant women (21: 84%) and in those with two or more pregnancies (48: 73.8%). In contrast, Gram-negative bacteria numbered 3 (12%) and 16 (24.6%) in first-time pregnant women and in those with two or more pregnancies, respectively. A minor percentage of unknown bacteria was observed in both groups. A comparison of bacteria based on Gram stain showed no significant difference between first-time pregnancy and two or more pregnancies (p = 0.260).
Clinical Parameters and Pregnancy
The comparison of PI scores between pregnant (1.59 ± 0.60) and nonpregnant women (1.60 ± 0.75) showed no significant difference (p = 0.900). Similarly, GI between pregnant (1.67 ± 0.58) and nonpregnant women (1.75 ± 0.72) showed no significant difference (p = 0.710). The mean PI and GI scores did not differ significantly across pregnancy frequencies (p = 0.244, p = 0.443) or pregnancy trimesters (p = 0.256, p = 0.392). PI varied significantly across different bacterial species (p = 0.036). However, no significant difference was observed with the GI score (p = 0.378). The primary purpose of this research was to make a qualitative assessment of gingival sulcular microbiota in terms of bacterial species or type during different pregnancy trimesters. Secondly, it was to identify the microbial shift occurring in different trimesters of pregnancy compared to nonpregnant women. It has been reported that different types of organisms were identified in all the samples from pregnant and nonpregnant women . In the present study, bacterial species were compared between pregnant and nonpregnant women, and a statistically significant difference was observed. Specific strains of LG, AN, PD, and VL were found only in pregnant women compared to nonpregnant women. Hence, the null hypothesis that no microbial shift occurs in the gingival sulcus during the different pregnancy trimesters was rejected. In this study, the gingival sulcus of pregnant women demonstrated a remarkable increase in AN. However, previous studies have reported increases in the proportions of Bacteroides intermedius and Clostridium bifermentans in the gingival sulcus of pregnant women . Haffajee et al.
reported that there were differences in the prevalence of subgingival bacterial species by country; for instance, Prevotella melaninogenica was identified in about 6% of Chilean and Swedish patients but not in subjects from other countries. Also, Veillonella parvula was detected at a higher percentage among Americans in comparison to Swedish subjects . Another study reported subgingival bacterial profile changes over the course of pregnancy as well as after delivery . Moreover, AN continues to be the significantly predominant bacteria in second and third trimesters of pregnancy compared to the first trimester of pregnancy. In contrast, LF was predominantly observed in the gingival sulcus of nonpregnant and pregnant women in the first trimester of pregnancy. Thus, a clear microbial shift was observed from the first trimester of pregnancy to the second and third trimesters. In addition, when bacterial species were compared between first-time pregnancy and two or more pregnancies, AN was found to be the predominant bacteria, with no significant difference. It is speculated that the microbial shift could be attributed to the hormonal variations observed during the pregnancy, since previous studies have indicated that pregnancy hormones seem to be capable of altering the normal subgingival bacterial flora and subgingival ecology . In our study, three-fourths of the study participants demonstrated the presence of gram-positive (LF, LA, LG, AN, AO, CL, BB and LP) bacterial species. As the pregnancy progressed from the first trimester to the third trimester, gram-positive species were replaced by gram-negative (PD and VL) species. This bacterial transition was statistically significant, with the presence of some unknown bacterial species. This finding can be corroborated with the study by Kornman and Loesche (1980) , who reported that gram-negative anaerobic rods accounted for approximately 10% of the total flora during the first trimester. Gram-negative anaerobic rods had increased to 39% of the flora during the second trimester of pregnancy. The majority of studies, however, focused on what Socransky et al. called the orange complex ( Peptostreptococcus micros , Prevotella nigrescence , Fusobacterium nucleatum , and Prevotella intermedia ) and the “red complex” ( Campylobacter rectus , Tannerella forsythia , Treponema denticola , and Porphyromonas gingivalis ) . These organisms may be entirely missing from the research participants, or Vitek 2 may have overlooked them in error (unknown organisms). Gram-positive and gram-negative bacterial species did not differ significantly across pregnancies frequencies. In this study, the plaque score did not show any significant difference between the pregnant and nonpregnant participants. The plaque scores gradually increased as the pregnancy trimester progressed, with the highest plaque score being observed during the third trimester. When plaque scores were compared across different pregnancy trimesters and nonpregnant women, there was no significant difference found. This finding is in accordance with the studies that reported a gradual increase in the plaque score in the first, second and third trimester of pregnancy . In contrast, a few studies observed no fluctuation in plaque scores throughout pregnancy and in non-pregnant women . 
It can be speculated that the free prenatal dental counselling, educational level and self-motivation of pregnant women could have contributed to better oral hygiene behavior and the insignificant differences in plaque scores. Similarly, women with ≥2 pregnancies showed a high plaque score, which did not differ significantly across first-time pregnant and nonpregnant women. In addition, plaque scores demonstrated a significant difference across the presence of bacterial species. The study participants showed the highest plaque scores in the presence of LG and CL, while the lowest score was found with AO and PD. The present study has shown varying degrees of gingival inflammation between pregnant and nonpregnant women, with an insignificant difference. Similarly, gingival score varied across nonpregnant women and pregnant women in different trimesters, with a gradual increase in the gingival scores. In line with this study, Jonsson et al. (1988) found no significant difference in the gingival health of pregnant and post-partum women . In contrast to this study, several reports have shown a statistically significant difference in gingival inflammation between nonpregnant and pregnant women in different trimesters . The variation in the observed gingival score could be due to the fact that the prevalence, extent and severity of gingival inflammation during pregnancy differ considerably among various reported studies. Methodological heterogeneity may, at least in part, explain differences in the obtained results. Cross-sectional studies in comparison to longitudinal studies hamper the analysis of the relationship between pregnancy and the exacerbation of gingival inflammation . Other factors that vary within different research groups may have affected the wide range of results obtained, including the use of different clinical indices, study designs, measurement equipment and the control of confounding factors . Strengths & Limitations of the Study This is the first study carried out in Riyadh, and not many are conducted around the world. The subjects in the pregnant and nonpregnant groups were recruited from one single health-care center, and they were assumed to be homogenous in their ethnicity and socio-economic status. Since younger or older women may have hormonal fluctuations other than pregnancy, such as puberty or menopause, we limited the age range of the present study participants to 18–40 years. Limiting the age range prevented these variables from potentially influencing the periodontal condition of the pregnant study population, thereby enhancing the strength of the results. Even though the study demonstrated positive strengths, the current study possessed some limitations. The study was restricted to the assessment of the gingival crevicular microbiota and clinical indices on plaque and gingival inflammation. Hence, other periodontal parameters, such as pocket depth, alveolar bone loss, gingival biotype and clinical attachment levels, have not been recorded. No attempt was made to estimate salivary, blood or tissue concentrations of the hormone levels, especially estrogen and progesterone. The study sample included a large number of pregnant women and only a few nonpregnant subjects. Despite the limitations, the present study provided an insight into the gingival crevice microbiota in pregnant women. Future longitudinal follow-up studies of pregnant women, with a large sample size based in multiple centers, are required to confirm the current study findings. 
An estimation and correlation of the hormonal levels, gingival sulcular microbiota, and clinical periodontal parameters, including gingival biotype, are required to establish an objective relationship between these variables. Actinomyces naeslundii was the predominant bacterial species observed in the study. Lactobacillus fermentum was the most common species found in nonpregnant women and in pregnant women during the first trimester of pregnancy. However, Actinomyces naeslundii was the main bacterial species found in pregnant women during the second and third trimesters of pregnancy. During pregnancy, we observed a significant shift in the bacterial microbiota, potentially due to hormonal changes. Moreover, Actinomyces naeslundii remained the main bacterial species regardless of pregnancy frequency (first-time pregnancy or two or more pregnancies). Significant changes in Gram-positive and Gram-negative bacteria were observed across all pregnancy trimesters. The different trimesters of pregnancy revealed a notable microbial shift in the gingival crevices of pregnant women, without any impact on gingival inflammation.
Time–frequency time–space LSTM for robust classification of physiological signals
b90ad72f-6ccb-4a9d-9045-e3d11a8b8778
7994826
Physiology[mh]
Analysis and classification of clinical time-series data in physiology and disease processes are considered as a catalyst for biomedical research and education. Innovative computerized tools for physiological data classification are increasingly needed to facilitate investigations on new unsolved challenging problems in clinical and life sciences with respect to both basic and translational perspectives. Conventional methods for classification of physiological time series to detect abnormal conditions include fractals, chaos, nonlinear dynamics, signal coding, pattern matching, and machine learning. The current surge of modern artificial intelligence (AI) opens a new approach for sequential data classification with long short-term memory (LSTM) networks , which are an architecture of deep learning. LSTM networks are a type of recurrent neural networks that learn order dependence in sequential data. There are many methods developed for classification of time series in different fields of applications. Time-series classification algorithms based on discriminatory features can be categorized into six main groups : (1) whole series, (2) intervals, (3) shapelets, (4) dictionary, (5) combinations, and (6) model. For the whole-series approach, classification is performed by comparing the similarity between two time series using a distance measure. The methods of intervals choose one or multiple intervals of the series and use summary measures as features for classification. The methods of shapelets define a class with phase-independent patterns called shapelets, then a class is identified by the existence of one or more shapelets in the whole time series. The dictionary-based methods classify time series based on the frequency of its recurring subseries. The methods of combinations try to combine two or more methods of the whole series, intervals, shapelets, and dictionary for classification. The model-based methods fit a time series to mathematical models constructed for the classes and then assign the time series to the class that has the largest similarity score given by the class model. Most recently, deep-learning methods or deep neural networks have been reported to outperform many baseline time-series classification approaches and appear to be the most promising techniques for classifying temporal data . Because LSTM networks can capture long-term temporal dependencies, they have been applied to provide solutions for many difficult problems in bioinformatics and computational biology . As a state-of-the-art method for learning physiological models for disease prediction, many applications of LSTM and other deep-learning networks have recently been reported in literature, such as classifying electroencephalogram (EEG) signals in emotion, motor imagery, mental workload, seizure, sleep stage, and event related potentials , non-EEG signals in Parkinson’s disease (PD) , learning and synthesis of respiration, electromyograms, and electrocardiograms (ECG) signals , decoding of gait phases using EEG , and early prediction of stress, health, and mood using wearable sensor data . The present work presents a time–frequency time–space LSTM tool for robust and efficient classification of physiological time series, while solutions obtained from conventional LSTM networks would result in lower accuracy and higher data training time. 
Furthermore, for the case of clinical gait analysis with the use of measurement sensors to assess biomechanical patterns and therapeutic plan for rehabilitation in patients disabled from conditions such as PD and post stroke, long walk trials are recommended to obtain at least 370 strides . Such long-distance walks result in long records of physiological measurements, cause discomfort to the patients, and may be impractical to perform in many clinical settings . Differentiating patients with PD from healthy controls using gait data was studied in , which trained fuzzy neural networks with wavelet features extracted from the gait data. Another study extracted gait features with the short-time Fourier transform and used the support vector machines (SVMs) for the classification task . To capture the local changes in the dynamics of gait signals, the feature-extraction method of shifted 1-D local binary patterns and a multilayer perceptron, which is a class of feed-forward artificial neural networks, were used for the classification of PD and healthy controls . The extraction of time-domain and frequency-domain features of gait data for training with random decision forests, which are an ensemble machine-learning method for classification, was reported in a more recent study for detecting patients with PD . All these studies employed shallow neural networks or SVMs. However, deep neural networks are known to be the most advanced models of the neural-network approach and shown to be of performance superior to other types of statistical classifiers . The novel idea for classification of physiological data with LSTM presented herein is the creation of complementary time–frequency and time–space features of time series. In signal processing, instead of viewing a time series as a one-dimensional signal, time–frequency analysis studies a signal in both time and frequency domains simultaneously by some function whose domain is the two-dimensional real plane to extract transient features from the signal by a time–frequency transform. Time–frequency signal processing for feature extraction was reviewed as a useful approach for pattern recognition that provided successful applications, including EEG seizure detection and classification , classification of ultra-high-frequency signals , classification of vibration events , and classification of EEG signals and episodic memory . In nonlinear dynamics, the time–space analysis attempts to transform one-dimensional signal into a two-dimensional space to enable the visualization of the recurrences of states of a dynamical system at certain times and enable the extraction of distinctive features representing behaviors of different dynamical mechanisms underlying nonlinear time series. The extraction of novel features from time series not only facilitate the power of signal compression for deep learning but also enhances the capability of LSTM networks for robust signal classification. In chaos theory, the method of recurrence plots (RPs) was developed for nonlinear time-series analysis . RPs and extended methods were further addressed for the analysis of complex systems , and dynamical features of nonlinear time series . While an RP is a binary visualization of recurrences of states of a dynamical system at certain pairs of time, a fuzzy recurrence plot (FRP) displays the visualization as a grayscale image. 
Because of being much richer in texture than RPs, the technique of FRPs of time series is a preferred approach for texture analysis and has been successfully applied to extract texture features for pattern recognition, including classification of PD and control subjects using deep learning , , tensor decomposition , and SVMs ; and other neurodegenerative diseases . In general, the time–frequency analysis is known as a preferred approach for the representation and essential feature extraction of non-stationary signals because it is effective for estimating the underlying characteristics composing the signals , whereas the time–space analysis provides another kind of visual information about the signals by detecting hidden dynamical features inherent in the data. The combination of complementary features generated by both time–frequency and time–space analysis methods is therefore promising for enhancing the classification power of sequential deep learning.
ECG data
ECG signals capture the electrical activity of a human heart over a period of time. ECG signals are used by physicians for examining the condition of a patient's heartbeat to detect if the condition is normal or irregular. Atrial fibrillation (AF) is a type of irregular heartbeat that occurs when the upper chambers of the heart (atria) beat out of coordination with the lower chambers (ventricles). The ECG data used in this study are publicly available from the PhysioNet: The Research Resource for Complex Physiologic Signals . The data consist of ECG signals sampled at 300 Hz and classified by a group of experts into normal sinus rhythm, AF, alternative rhythm, and noise. The purpose of the creation of this challenging database was to call for the development of new methods for classifying these types of cardiac arrhythmias. Information about the number of participants in the recordings of normal rhythm, AF rhythm, and other rhythms is not available from the data source .
Gait in Parkinson's disease data
The Gait in Parkinson's Disease database consists of time series of vertical ground reaction force in Newtons of gait dynamics from 93 patients with idiopathic PD and 73 healthy controls. This database is also publicly available from the PhysioNet: The Research Resource for Complex Physiologic Signals . The data consist of the vertical ground reaction force (in Newtons) signals of the subjects as they walked at their usual, self-selected pace for approximately 2 minutes on level ground. The force was measured as a function of time with 8 sensors placed underneath each foot. The force signals of each of the 16 sensors placed under the two feet of each subject were digitized and recorded at 100 samples per second.
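As a concrete illustration of turning such long recordings into compact feature sequences, the sketch below computes two per-window features from a spectrogram, a spectral mean frequency (an instantaneous-frequency estimate) and the spectral entropy, which are formalized in the Methods that follow. This is a minimal Python/scipy sketch using a synthetic stand-in signal; the window lengths and other parameters are assumptions for illustration, not the authors' settings.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 300.0                                    # sampling rate (Hz), e.g., the ECG data
t = np.arange(0, 30, 1 / fs)
x = np.sin(2 * np.pi * 7 * t) + 0.3 * np.random.randn(t.size)   # synthetic stand-in signal

# Time-frequency power spectrogram P(t, f)
f, tt, P = spectrogram(x, fs=fs, nperseg=256, noverlap=128)

# Mean (instantaneous) frequency per time bin: sum_f f*P(t,f) / sum_f P(t,f)
inst_freq = (f[:, None] * P).sum(axis=0) / P.sum(axis=0)

# Spectral entropy per time bin: treat each normalized spectrogram column as a probability distribution
p = P / P.sum(axis=0, keepdims=True)
spec_entropy = -(p * np.log2(p + 1e-12)).sum(axis=0)

features = np.stack([inst_freq, spec_entropy], axis=0)   # 2 x T feature sequence, e.g., as LSTM input
print(features.shape)
```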
Instantaneous frequency
The instantaneous frequency (IF) of a non-stationary signal is a time-varying parameter that relates to the average of the frequencies f present in the signal as it evolves over time instants t , . The IF function estimates the IF of a signal at a sampling rate by computing the spectrogram power spectrum P(t, f) and estimating the IF as

$$IF(t) = \frac{\int_{-\infty}^{\infty} f\, P(t,f)\, df}{\int_{-\infty}^{\infty} P(t,f)\, df}. \tag{1}$$

The power spectrum is a mathematical expression of the amount of the signal at a frequency f. For a periodic signal, peaks at the fundamental frequency and its harmonics are observed in the spectrum; for a quasiperiodic signal, peaks at linear combinations of related frequencies are observed; and a chaotic signal yields broad-band components in the spectrum. In practice, the exact solution for the power spectrum cannot be determined because a signal x(t) is not infinitely long but measured over a finite interval $0 \le t \le T$. Therefore, the power spectrum needs to be numerically estimated. A method for estimating the power spectrum of a time series $x_k$, $k = 0, \dots, N-1$, is described as follows. The spectral density of a time series of length N can be approximated as

$$P_N(f) = \frac{\Delta t}{N} \left| \sum_{k=0}^{N-1} x_k e^{-i 2\pi f k \Delta t} \right|^2, \tag{2}$$

where $\Delta t$ is the sampling interval. If the spectral value is calculated at $f = j \Delta f$, where $\Delta f = 1/(N \Delta t)$ and $\Delta t = 1$, then

$$P_j = \frac{1}{N} \left| \sum_{k=0}^{N-1} x_k e^{-i 2\pi jk/N} \right|^2 = \frac{1}{N} |X_j|^2, \tag{3}$$

which indicates the discrete Fourier transform (DFT), $X_j$, as

$$X_j = \sum_{k=0}^{N-1} x_k e^{-i 2\pi jk/N}, \quad j = 0, \dots, N-1. \tag{4}$$

However, it was proved that the power spectrum estimate expressed in Eq. (3) is not properly scaled .
Therefore, the estimate is modified as

$$P_j = \frac{1}{WN} \left| \sum_{k=0}^{N-1} w_k x_k e^{-i 2\pi jk/N} \right|^2, \quad j = 0, \dots, N-1; \tag{5}$$

in which

$$W = \frac{1}{N} \sum_{j=0}^{N-1} w_j^2, \tag{6}$$

where $w_j$, $j = 0, \dots, N-1$, are the weights or coefficients of a window function (the Kaiser window is applied in this study). The estimate of $P_j$ expressed in Eq. (5) using the fast Fourier transform (FFT) can be sequentially carried out as follows :
1. Truncate the time series or pad it with zeros so that $N = 2^n$, where n is a positive integer.
2. Weight the time series with a window function.
3. Calculate the DFT of the weighted time series $(w_k x_k)$ using the FFT.
4. Calculate $P_j$ using Eq. (5).
Spectral entropy
The spectral entropy (SE) of a signal is a measure of its spectral power distribution , . The SE treats the normalized power distribution of the signal in the frequency domain as a probability distribution and calculates its Shannon entropy. The Shannon entropy in this context is known as the spectral entropy of the signal. Given a time–frequency power spectrogram P(t, f), the probability distribution at time t, $0 \le t \le T$, and frequency point m, $m = 1, \dots, N$, denoted as p(t, m), is

$$p(t,m) = \frac{P(t,m)}{\sum_f P(t,f)}, \tag{7}$$

where $f \in [0, fs/2]$ is specified in this study, and fs is the sampling frequency. The spectral entropy at time t, denoted as H(t), is given as

$$H(t) = -\sum_{m=1}^{N} p(t,m) \log_2 p(t,m). \tag{8}$$

Fuzzy recurrence plot
In the study of dynamical systems, a sequence of values in time can be transformed into an object in space. This transformation allows the sequence to be analyzed in space. Such a space is called the phase space. The object in the phase space is called the phase space set. The transformation of a sequence of values in time into an object in the phase space can be done using time-delay embedding . The embedding dimension describes the space (such as a line, an area, or a volume) that contains the object . Time delay, which is also called lag, expresses the amount of offset in a time series. Mathematically, the phase-space reconstruction using time-delay embedding for a time series $(z_1, z_2, \dots, z_I)$ can be performed as $\mathbf{y}_i = (z_i, z_{i+\phi}, \dots, z_{i+(d-1)\phi})$, $i = 1, \dots, I-(d-1)\phi$, where $\phi$ and d are the time delay and embedding dimension, respectively. In fuzzy logic , a fuzzy set is defined as a collection of distinct objects whose membership grades in the set are expressed with real numbers. In mathematical terms, let U be a universe of discourse and F a subset of U. The fuzzy set F is characterized by a fuzzy membership function $\mu_F(x)$ that maps each element $x \in U$ to the interval [0, 1]: $\mu_F(x): U \rightarrow [0, 1]$. The real value of $\mu_F(x)$ is called the fuzzy membership grade of x in F.
The notion of a fuzzy set can be expressed in the following three cases: (1) $\mu_F(x) = 0$ if x is not at all in F; (2) $\mu_F(x) = 1$ if x is totally in F; and (3) $0 < \mu_F(x) < 1$ if x is partially in F. Thus, the greater the value of the fuzzy membership grade, the more certain it is that x is a member of F. In cluster analysis, data points can be assigned to different groups or clusters. Points that are most similar to each other belong to the same cluster. Based on the concept of fuzzy sets, fuzzy clustering assigns the data points to all clusters with different degrees of fuzzy membership. In other words, the fuzzy membership value of a data point for a certain cluster indicates how strongly the data point belongs to that cluster. Now let $\mathbf{X} = (\mathbf{x}_1, \dots, \mathbf{x}_N) \in \mathbf{R}^{N \times m}$ with $\mathbf{x}_i \in \mathbf{R}^m$ be a phase-space collection of a signal transformed by the time-delay embedding method, c a pre-defined number of clusters, $\mathbf{V} = \{\mathbf{v}_1, \dots, \mathbf{v}_c\}$ a set of clusters, and $\mu(\mathbf{x}_i, \mathbf{v}_q)$, $i = 1, \dots, N$, $q = 1, \dots, c$, the fuzzy membership grades expressing the degrees of the phase-space vectors $\mathbf{x}_i$ belonging to the cluster centers $\mathbf{v}_q \in \mathbf{V}$. These fuzzy membership grades can be determined using the fuzzy c-means algorithm . An FRP, denoted by $\tilde{\mathbf{R}}$, is defined as

$$\tilde{\mathbf{R}}(i,j) = \mu(\mathbf{x}_i, \mathbf{x}_j), \quad i, j = 1, \dots, N, \tag{9}$$

where $\mu(\mathbf{x}_i, \mathbf{x}_j) \in [0, 1]$ is the fuzzy membership of similarity between $\mathbf{x}_i$ and $\mathbf{x}_j$. The elements of an FRP, $\tilde{\mathbf{R}}(i,j)$, $i = 1, \dots, N$, $j = 1, \dots, N$, can be inferred using three properties of fuzzy relations as follows.

Reflexivity:
$$\mu(\mathbf{x}_i, \mathbf{x}_i) = 1, \quad i = 1, \dots, N. \tag{10}$$

Symmetry:
$$\mu(\mathbf{x}_i, \mathbf{v}_q) = \mu(\mathbf{v}_q, \mathbf{x}_i), \quad i = 1, \dots, N, \ q = 1, \dots, c. \tag{11}$$

Transitivity:
$$\mu(\mathbf{x}_i, \mathbf{x}_j) = \max[\min\{\mu(\mathbf{x}_i, \mathbf{v}_q), \mu(\mathbf{x}_j, \mathbf{v}_q)\}], \quad q = 1, \dots, c. \tag{12}$$

As an example, to illustrate some of the differences in the visual displays of an RP and an FRP, Fig. shows a time series of 2000 points of the X-component of the Lorenz (chaotic) system , and its RP and FRP. The RP was constructed using an embedding dimension of 3, a time delay of 1, and a conventional value for the similarity threshold of 5% of the standard deviation of the signals. The FRP was constructed using an embedding dimension of 3, a time delay of 1, and a number of clusters of 3. The grayscale image of the FRP is much richer in texture than the binary image of the RP.
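To make the construction above concrete, the following is a minimal Python sketch of an FRP: time-delay embedding, a small fuzzy c-means loop to obtain the memberships μ(x_i, v_q), and the max–min composition of Eq. (12). It is an illustrative implementation of the stated definitions, not the authors' code; the helper names and the parameters (d = 3, φ = 1, c = 3, mirroring the Lorenz example) are assumptions.

```python
import numpy as np

def embed(z, d=3, phi=1):
    """Time-delay embedding: each row is a phase-space vector y_i."""
    n = len(z) - (d - 1) * phi
    return np.array([z[i:i + (d - 1) * phi + 1:phi] for i in range(n)])

def fuzzy_cmeans(X, c=3, m=2.0, n_iter=100, seed=0):
    """Return a (c x N) fuzzy membership matrix U for c cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)                   # cluster centers
        dist = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        U = 1.0 / (dist ** (2 / (m - 1)))                              # standard FCM membership update
        U /= U.sum(axis=0, keepdims=True)
    return U

def fuzzy_recurrence_plot(z, d=3, phi=1, c=3):
    X = embed(np.asarray(z, dtype=float), d, phi)
    U = fuzzy_cmeans(X, c)                                             # mu(x_i, v_q)
    # Transitivity (max-min composition): mu(x_i, x_j) = max_q min(U[q, i], U[q, j])
    R = np.max(np.minimum(U[:, :, None], U[:, None, :]), axis=0)
    np.fill_diagonal(R, 1.0)                                           # reflexivity, Eq. (10)
    return R

# Toy example: FRP of a noisy sine wave; R is a grayscale image with values in [0, 1]
z = np.sin(0.1 * np.arange(500)) + 0.05 * np.random.randn(500)
R = fuzzy_recurrence_plot(z, d=3, phi=1, c=3)
print(R.shape)   # (498, 498)
```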
Fuzzy recurrence image entropy

The entropy of a grayscale image is a statistical measure of randomness used to characterize the texture of the image. As an FRP is a grayscale image, the entropy of an FRP image is defined as
$$E_{FRI} = -\sum_{k=1}^{K} p_k \log_2 p_k, \tag{13}$$
where $K = 256$ is the number of gray levels of the FRP (obtained by converting real pixel values in $[0,1]$ to integers in $[0,255]$), and $p_k$ is the probability associated with the intensity level $k$, $k = 1, \ldots, K$, obtained from the normalized histogram for the $k$-th bin.

Fuzzy recurrence entropy

Based on the definition of the non-probabilistic entropy of a fuzzy set, the entropy of an $N \times N$ FRP, or fuzzy recurrence entropy, which is a measure of the degree of uncertainty of the recurrences of the reconstructed phase space of a signal, is defined as
$$E_{FR} = \sum_{i=1}^{N}\sum_{j=1}^{N} \big[-\mu(\mathbf{x}_i,\mathbf{x}_j)\,\log_2 \mu(\mathbf{x}_i,\mathbf{x}_j) - [1-\mu(\mathbf{x}_i,\mathbf{x}_j)]\,\log_2[1-\mu(\mathbf{x}_i,\mathbf{x}_j)]\big], \tag{14}$$
where $\mu(\mathbf{x}_i,\mathbf{x}_j)$ corresponds to $\tilde{R}(i,j)$ defined in Eq. (9).
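A minimal NumPy sketch of the two FRP-derived features defined above: the 256-level image entropy of Eq. (13) and the fuzzy recurrence entropy of Eq. (14). Here R is assumed to be an N x N FRP with values in [0, 1], for example one produced by the sketch given earlier.

```python
import numpy as np

def frp_image_entropy(R, levels=256):
    """Shannon entropy of the normalized gray-level histogram of an FRP (Eq. 13)."""
    gray = np.round(np.clip(R, 0.0, 1.0) * (levels - 1)).astype(int)
    hist = np.bincount(gray.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                                  # skip empty bins
    return float(-np.sum(p * np.log2(p)))

def fuzzy_recurrence_entropy(R):
    """Element-wise non-probabilistic fuzzy entropy summed over the FRP (Eq. 14)."""
    mu = np.clip(R, 1e-12, 1.0 - 1e-12)           # avoid log2(0)
    return float(np.sum(-mu * np.log2(mu) - (1.0 - mu) * np.log2(1.0 - mu)))
```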
Based on LSTM networks, in which the proposed input time–frequency (TF) and time–space (TS) features are included, the architecture of a TF–TS LSTM block is graphically described in Fig. . This figure illustrates the flow of an input time series $\mathbf{u} = (\mathbf{u}_1, \ldots, \mathbf{u}_M) \in \mathbb{R}^{M \times Q}$ through an LSTM layer, where $M$ is the number of segments split from the original time series of length $L$, and $Q$ is the number of features. In this study, $M = \lceil L/N \rceil$, where $N = 128$, $\lceil \cdot \rceil$ denotes the ceiling function, and $Q = 4$. The input at a time point is the concatenation of the four features extracted for the segment at the same time point, i.e., $\mathbf{u}_\tau = (F_{\tau 1}, F_{\tau 2}, F_{\tau 3}, F_{\tau 4})^T$, $\tau = 1, \ldots, M$, where $F_{\tau 1}$, $F_{\tau 2}$, $F_{\tau 3}$, and $F_{\tau 4}$ are the instantaneous frequency, spectral entropy, fuzzy recurrence image entropy, and fuzzy recurrence entropy extracted from the $\tau$-th segment, respectively.

The learnable weights of an LSTM layer are the input weights, denoted as $\mathbf{a}$, the recurrent weights, denoted as $\mathbf{r}$, and the bias, denoted as $b$. The matrices $\mathbf{A}$ and $\mathbf{R}$ and the vector $\mathbf{b}$ are the concatenations of the input weights, recurrent weights, and biases of each component, respectively:
$$\mathbf{A} = [\mathbf{a}_i, \mathbf{a}_f, \mathbf{a}_g, \mathbf{a}_o]^T, \tag{15}$$
$$\mathbf{R} = [\mathbf{r}_i, \mathbf{r}_f, \mathbf{r}_g, \mathbf{r}_o]^T, \tag{16}$$
$$\mathbf{b} = [b_i, b_f, b_g, b_o]^T, \tag{17}$$
where $i$, $f$, $g$, and $o$ denote the input gate, forget gate, cell candidate, and output gate, respectively.
The cell state at time step $\tau$ is defined as
$$\mathbf{c}_\tau = f_\tau \circ \mathbf{c}_{\tau-1} + i_\tau \circ g_\tau, \tag{18}$$
where $\circ$ is the Hadamard product. The hidden state at time step $\tau$ is given by
$$\mathbf{h}_\tau = o_\tau \circ \sigma_c(\mathbf{c}_\tau), \tag{19}$$
where $\sigma_c$ is the state activation function, usually the hyperbolic tangent function (tanh). At time step $\tau$, the input gate ($i_\tau$), forget gate ($f_\tau$), cell candidate ($g_\tau$), and output gate ($o_\tau$) are defined as
$$i_\tau = \sigma_g(\mathbf{a}_i \mathbf{u}_\tau + \mathbf{r}_i \mathbf{h}_{\tau-1} + b_i), \tag{20}$$
$$f_\tau = \sigma_g(\mathbf{a}_f \mathbf{u}_\tau + \mathbf{r}_f \mathbf{h}_{\tau-1} + b_f), \tag{21}$$
$$g_\tau = \sigma_c(\mathbf{a}_g \mathbf{u}_\tau + \mathbf{r}_g \mathbf{h}_{\tau-1} + b_g), \tag{22}$$
$$o_\tau = \sigma_g(\mathbf{a}_o \mathbf{u}_\tau + \mathbf{r}_o \mathbf{h}_{\tau-1} + b_o), \tag{23}$$
where $\sigma_g$ denotes the gate activation function, which usually adopts the sigmoid function.

A bidirectional LSTM (bi-LSTM) is an extension of the traditional LSTM that can improve performance on sequence classification problems. Instead of training one LSTM on the input time series, a bi-LSTM architecture is trained in both time directions simultaneously with hidden forward and backward layers: the first on the input time series as it is, and the second on a reversed copy of the time series. This architecture learns bidirectional long-term dependencies between time steps and can therefore provide additional context to the network, resulting in fuller learning of the data.

The procedures for obtaining balanced training and testing sets, and for transforming the raw time series into TF and TS features for LSTM learning and classification, are outlined in Fig. . To obtain signals of the same length in both the training and testing datasets, the histogram of the distribution of signal lengths is inspected to detect the majority length. Signals shorter than the majority length are discarded, while signals longer than the majority length are split into segments of the majority length, and any remaining samples are ignored. Creating signals of equal length is particularly useful for training networks that break the data into mini-batches: within a mini-batch, training pads or truncates the signals to the same length, and it is known that padding or truncating can reduce network performance because of the information it adds or removes. To balance the classes in both the training and testing sets, copies of the signals of the minority class are repeated until the minority class reaches the same size as the majority class. This step is described in Fig. a. The next step is to extract the TF features of the signals using the instantaneous frequency and spectral entropy, and the TS features using the fuzzy recurrence image entropy and fuzzy recurrence entropy, for training the networks (Fig. b).
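A minimal sketch of turning one raw signal into the M x 4 feature sequence described above: split the signal into segments of N = 128 samples, compute the four features per segment, and z-score standardize each feature. It reuses the helper sketches given earlier; the instantaneous frequency below is a simple spectral-centroid stand-in rather than the exact estimator of Eq. (1), trailing samples that do not fill a segment are dropped, and the FRP settings (embedding dimension 1, delay 1, 3 clusters) follow the values stated later for the experiments.

```python
import numpy as np

def instantaneous_frequency(seg, fs=300.0):
    """Power-weighted mean frequency of one segment (a spectral-centroid stand-in)."""
    P = np.abs(np.fft.rfft(seg)) ** 2
    f = np.fft.rfftfreq(len(seg), d=1.0 / fs)
    return float(np.sum(f * P) / np.sum(P))

def segment_features(seg, fs=300.0):
    """The Q = 4 features of one segment: IF, SE, E_FRI, and E_FR."""
    R = fuzzy_recurrence_plot(seg, dim=1, delay=1, c=3)
    return [instantaneous_frequency(seg, fs), spectral_entropy(seg),
            frp_image_entropy(R), fuzzy_recurrence_entropy(R)]

def feature_sequence(x, fs=300.0, seg_len=128):
    """M x 4 standardized feature sequence for one signal (M = floor(L / 128) here)."""
    segs = [x[k:k + seg_len] for k in range(0, len(x) - seg_len + 1, seg_len)]
    F = np.array([segment_features(s, fs) for s in segs])
    return (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-12)
```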
The same TF and TS features are extracted from the testing signals as the input for the trained TF–TS LSTM networks to carry out the classification task (Fig. c).

Let condition positive $P$ be the total number of disease signals, condition negative $N$ the total number of healthy-control signals, true positive $TP$ the number of disease signals correctly identified as disease, false positive $FP$ the number of healthy-control signals incorrectly identified as disease, true negative $TN$ the number of healthy-control signals correctly identified as healthy control, and false negative $FN$ the number of disease signals incorrectly identified as healthy control. Accuracy ($ACC$) is defined as
$$ACC = \frac{TP + TN}{P + N}. \tag{24}$$
Sensitivity ($SEN$) is defined in this study as the proportion of disease signals that are correctly identified as having the condition:
$$SEN = \frac{TP}{P}. \tag{25}$$
Specificity ($SPE$) is the proportion of healthy-control signals that are correctly identified as not having the disease:
$$SPE = \frac{TN}{N}. \tag{26}$$
Precision ($PRE$) is calculated as
$$PRE = \frac{TP}{TP + FP}. \tag{27}$$
The $F_1$ score is the harmonic mean of precision and sensitivity and is calculated as
$$F_1 = \frac{2TP}{2TP + FP + FN}. \tag{28}$$

The tenfold cross-validation results for the two physiological databases, ECG and Gait in Parkinson's Disease, are listed in the tables. For the ECG database, this experiment used normal sinus rhythm (5050 signals) and AF (738 signals) for binary classification. For the Gait in Parkinson's Disease data, this study used the time series recorded from only one sensor under the left foot, labeled L5 in the database. The purpose of selecting the sensor data recorded at the L5 location was to allow comparison with the work reported in , which used four sensors at L5, L7, R7, and R8 for the classification of gait patterns.

The LSTM used in this study was the bi-LSTM (hereafter referred to simply as LSTM). To extract the TF features, the sampling frequency was set to 300 Hz. To extract the TS features, the embedding dimension was 1, the time delay was 1, and the number of clusters was 3 for computing the FRPs. These FRP parameter settings were based on previous studies, which provided satisfactory results, and FRP construction is not as sensitive to these parameters as RP construction is. All TF and TS features were standardized to improve network training and testing. For the LSTM specifications, the network layer had an output size of 100, followed by a fully connected layer of size 2 (two classes), a softmax layer, and a classification layer. The training options of the bi-LSTM were set as follows: optimizer = Adam (adaptive moment estimation) including an $L_2$ regularization factor, maximum number of epochs = 80, mini-batch size = 150, initial learning rate = 0.01, and gradient threshold = 1.
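For readers who prefer an open-source framework, the following is an illustrative PyTorch stand-in for the bi-LSTM classifier configured above; the original work used MATLAB, so this is not the authors' implementation. The hidden size of 100, two output classes, Adam with an L2 penalty, learning rate of 0.01, and gradient clipping at 1 follow the stated training options, while the L2 factor value and all other details are assumptions.

```python
import torch
import torch.nn as nn

class TFTSBiLSTM(nn.Module):
    """Bi-LSTM over M x 4 feature sequences followed by a 2-class linear layer."""
    def __init__(self, n_features=4, hidden=100, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)   # forward + backward hidden states

    def forward(self, x):                 # x: (batch, M, 4)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])     # logits from the last time step

model = TFTSBiLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()         # plays the role of softmax + classification layer

def train_step(batch_x, batch_y):
    """One mini-batch update with gradient clipping at threshold 1."""
    optimizer.zero_grad()
    loss = criterion(model(batch_x), batch_y)
    loss.backward()
    nn.utils.clip_grad_norm_(model.parameters(), 1.0)
    optimizer.step()
    return loss.item()
```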
For the ECG data, the TF–TS LSTM significantly outperformed the conventional LSTM in terms of classification accuracy (58% for the conventional LSTM versus 94% for the TF–TS LSTM), the other statistical measures (sensitivity, specificity, precision, and $F_1$ score), and training time (3506 minutes for the LSTM versus 1 minute for the TF–TS LSTM, where the time for computing the four features was excluded from the TF–TS LSTM training). The specificity (34%) is much lower than the sensitivity (83%) obtained from the conventional LSTM, while these two measures are much more balanced with the TF–TS LSTM (sensitivity = 91% and specificity = 96%). For the gait data, using the signals recorded from only one sensor, the TF–TS LSTM provided perfect classification metrics (accuracy = 100%, sensitivity = 100%, specificity = 100%, precision = 100%, and $F_1$ score = 1) with a training time of less than 1 minute (the time for computing the four features was excluded). The use of the conventional LSTM yielded an accuracy of 79% with 111 minutes of data training. Five other previously reported methods that studied the same database, using between 4 and 16 sensors, obtained accuracy rates between 77% and 98% (standard deviations of the classification results of these five methods were not given in the literature).

Computer experiments have shown that the TF–TS LSTM achieved very high performance in the classification task and saved tremendous training time in comparison with the conventional implementation of the LSTM. As an example, Fig. shows the contrast between the training processes of the conventional LSTM and the TF–TS LSTM with respect to the convergence of accuracy and the number of iterations. Not only did the TF–TS LSTM outperform the conventional LSTM; the classification results for gait in Parkinson's disease in terms of accuracy, sensitivity, specificity, precision, and $F_1$ score obtained from the TF–TS LSTM are also higher than those previously reported in the literature. In particular, the TF–TS LSTM used the data recorded from only one sensor. A significant reduction in the number of biomedical sensors needed to measure human physiological parameters in real time for disease detection has implications for user comfort and contributes to the low cost, simplicity, and portability of wearable sensor technology. In this study, only the gait data recorded by one sensor located at L5 were used, to allow comparison with the other work that included the data recorded by four sensors located at L5, L7, R7, and R8. The gait classification using a single sensor located at L5, obtained with the proposed TF–TS LSTM, outperformed the four-sensor classification obtained with methods based on phase-space reconstruction, empirical mode decomposition, and neural networks. Tests of the TF–TS LSTM for gait classification using data recorded from other single sensors were not carried out; however, the current comparison has shown the better performance of the TF–TS LSTM.
As the five methods compared with the TF–TS LSTM on the gait data were proposed and implemented by other authors, it would be difficult to implement them fairly for the classification of the ECG data without access to their source code. However, the test results obtained from the TF–TS LSTM are significantly higher than those of the LSTM on the two datasets, and the classification accuracy obtained from the LSTM using the gait data from only one sensor (79%) is higher than the result reported using the gait data from 8 sensors (77%).

Here, the signal lengths were made equal to the majority length. If no majority exists, or the histogram of lengths has a uniform distribution, the signal lengths can instead be made equal to the length of the shortest signal. In general, signals shorter than the majority length can be included for classification; however, as mentioned earlier, creating signals of equal length is more effective for network training and testing. In practice, recording physiological signals that meet some standard length for testing is feasible because the standard is based on the majority.

As shown in Fig. , the TF–TS LSTM training reached high accuracy, while training the LSTM with raw time series could not improve much in accuracy. Furthermore, the TF–TS LSTM requires a much shorter training time than training on raw long time series. This is because it is trained with sequential features of the time series instead of the time series itself: the length of the feature sequences is much shorter than that of the original data, and the effectiveness of the standardized features is an important factor in improving network performance during training.

Feature extraction can be related to dimensionality reduction, by which multivariate data are reduced to a lower-dimensional space for more manageable processing. The physiological time series used in this study are one-dimensional. Here, these time series were split into equal segments from which the four features were extracted for learning and classification by the TF–TS LSTM; in other words, the one-dimensional time series were transformed into much shorter sequences of four feature dimensions, as shown in Fig. . The extracted features provide essential information about the data in the time–frequency and time–space domains, which are intended to be complementary, informative, and non-redundant. Thus, the transformed data can facilitate the subsequent learning and leverage the discriminative power of sequential deep learning, leading to better class predictions. The results obtained in this study have shown that the TF–TS LSTM outperformed other statistical classifiers, including SVMs and the multilayer perceptron.

In summary, the finding is that training the LSTM network with raw time series produces poor classification results, whereas training the network with TF and TS features extracted from the signals can both significantly enhance the classification performance and reduce the training time. The MATLAB-based TF–TS LSTM software for the classification of physiological signals is designed to be easily used by biomedical and life-science users who do not have technical knowledge of AI, signal processing, or general physics, by following the provided step-by-step instructions (Supplementary Note).
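As a rough illustration of the length-equalization and class-balancing steps discussed here and in the pipeline description earlier, the following NumPy sketch keeps only signals of the modal (majority) length, splits longer signals into majority-length pieces, and oversamples the minority class by repetition. The data structures are assumptions, not the software's actual interface.

```python
import numpy as np
from collections import Counter

def equalize_lengths(signals):
    """Keep/split signals so that every output signal has the modal length."""
    lengths = [len(s) for s in signals]
    majority = Counter(lengths).most_common(1)[0][0]
    out = []
    for s in signals:
        if len(s) < majority:
            continue                              # discard shorter signals
        for k in range(len(s) // majority):       # split longer signals
            out.append(np.asarray(s[k * majority:(k + 1) * majority]))
    return out

def balance_classes(signals, labels):
    """Repeat minority-class signals until all classes have equal counts."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    X, y = [], []
    for cls in classes:
        idx = np.where(labels == cls)[0]
        reps = np.resize(idx, target)             # cycle through the class indices
        X.extend(signals[i] for i in reps)
        y.extend([cls] * target)
    return X, np.array(y)
```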
In biomedical data, the problem of class imbalance is common and can significantly prevent classifiers from achieving good results. The software suggests how to design balanced class samples for the training and testing datasets when minority classes exist.

An AI-based approach for improving the performance of disease detection using physiological signals has been presented and discussed. The proposed method takes advantage of information extracted from both the frequency and space domains of the temporal data for effective deep learning, improving the classification performance while lowering the computational complexity. Although the method was developed for classifying time series in physiology, it can be readily applied to the classification of other biological and clinical signals, such as time series in gene expression, neurology, and epidemiology.

The AI-based method presented in this work was tested using records obtained from a single-sensor measurement of gait in PD. The results suggest that the method has the potential to reduce the need for multiple sensors when recording physiological data, resulting in both cost savings and comfort for the participants. Further tests of the method with other multiple-sensor data would be necessary to confirm this finding. Wearable sensors are useful devices for evaluating patient outcomes in clinical trials; however, the devices need to provide physical ease to participants so that they are prepared to wear them. Otherwise, the deployment of such tools will not be practically feasible, particularly when applied to the older adult (> 50 years) population.

MATLAB software, ECG data for AF and normal sinus rhythm, and a Supplementary Note for running the ECG data used in this paper are publicly available at the author's personal homepage: https://sites.google.com/view/tuan-d-pham/codes under the title "TF–TS LSTM". Supplementary Information.
Fabrication of polyvinyl alcohol based fast dissolving oral strips of sumatriptan succinate and metoclopramide HCL
e6da6f80-3bc4-4f6f-af4e-730d4bd8fafb
10358599
Pharmacology[mh]
Migraine is a disorder of recurrent headache involving the trigeminovascular system and the cerebral cortex. The duration of headache attacks may range from 4 h to as long as 72 h. It is characterized by associated symptoms such as nausea and photalgia. Serotonin, with its receptors 5-HT 1B and 5-HT 1D, is strongly believed to play a role in the pathophysiology of migraine. Surprisingly, the autonomic nervous system also plays an important role in migraine: a decreased level of norepinephrine activates the sympathetic system and the parasympathetic baroreflex response linked with migraine. Migraine drastically affects daily life due to intolerable pain, nausea, and vomiting. Treatment options for migraine are divided into two groups: for prophylaxis, NSAIDs, β-blockers, serotonin antagonists, valproate, and selective serotonin reuptake inhibitors (SSRIs) are recommended, while for acute treatment, ergot derivatives and triptans are preferably used, owing to their significant vasoconstrictive potential. Triptans are 5-HT 1B and 5-HT 1D agonists; vasoconstriction of extracranial blood vessels and inhibition of neuropeptide release make sumatriptan an effective treatment option for migraine. Metoclopramide is a prokinetic agent used to treat nausea and vomiting, GIT motility disorders, and gastro-esophageal reflux disease.

There are different routes available for drug administration, but the oral route is the most preferred. The surface area of the oral cavity is sufficient for the rapid disintegration, dissolution, and absorption of fast dissolving films, and its highly vascularized mucosal lining makes it an appropriate route for the administration of drugs that require a quick response. Conventional dosage forms of sumatriptan and metoclopramide, such as tablets, capsules, and injections, have also been used to control the pain threshold and other symptoms, but insufficient dose and frequent dosing result in patient non-compliance. Therefore, a fast dissolving oral strip (FDOS) is an ideal candidate to fulfill all the basic requirements. The film is preferable to fast dissolving tablets due to its thinness and flexibility, whereas tablets are brittle and fragile and need to disintegrate before their dissolution. The purpose of the study was to target migraine and its associated symptoms of nausea and vomiting by formulating FDOSs with the potential to deliver both drugs in combination through the buccal route, bypassing the stomach, which can cause therapy failure.

Candidate drugs were sumatriptan succinate (generously gifted by ATCO Laboratories Pvt. Ltd, Karachi, Pakistan) and metoclopramide HCl (gifted by Unexo Laboratories Pvt. Ltd., Lahore), used as anti-migraine and anti-emetic agents, respectively. PVA (film former) and glycerol (plasticizer) were obtained from Merck, Darmstadt, Germany; PVP K30 (channeling agent) was purchased from Sigma-Aldrich Chemie GmbH, Germany; and ethanol (permeation enhancer), saccharin sodium (sweetener), and menthol (flavoring agent) were purchased from Pulcra Chemicals, Shanghai, China. Distilled water was freshly prepared in the research laboratory of the Faculty of Pharmacy, The University of Lahore.

Preparation of fast dissolving oral strips

Stock solutions of each ingredient were prepared for the formation of strips, except for saccharin sodium. The concentrations of these solutions were 100 mg PVA/mL of water, 100 mg glycerol/mL of water, 20 mg PVP K30/mL of water, 25 mg SS/mL of water, 5 mg MH/mL of water, and 10 mg menthol/mL of ethanol.
The plasticizer-to-polymer ratio has an integral role in film formulation, so to obtain a suitable formulation, three different concentrations of plasticizer (5%, 7.5%, and 10%) were used. Similarly, for each glycerol concentration, three concentrations of polymer (100, 125, and 150 mg) were used. In total, nine formulations were developed for co-delivery of both drugs, as per the compositions provided in . Oral strips were prepared by mixing the respective solutions of each ingredient and adding sodium saccharin accordingly. All the solutions were mixed in a beaker on a hot-plate magnetic stirrer, and the homogeneous mixture was then poured into pre-dried Petri dishes. The cast films were hot-air dried at 40°C for 24 h with inverted funnels placed over them. After drying, they were peeled off cautiously with the help of a sharp knife, packed separately in aluminum foil, and stored in a desiccator for further evaluation . Constant quantities of sumatriptan succinate (SS = 1 mL), metoclopramide HCl (MH = 1 mL), PVP K30 solution (0.5 mL), menthol solution (1 mL), and saccharin sodium (10 mg) were used.

Light microscopy

Light microscopy of the films was done to study their surface morphology at the micro level. An Optika microscope (4083B3, Italy) was used for this purpose. A small portion of each strip was cut and placed on a glass slide for observation at 40× under the optical microscope.

Fourier transform infrared analysis

Fourier transform infrared (FTIR) spectroscopy was performed to check the compatibility of the formulation ingredients. IR spectra of the neat ingredients and the optimized formulation were recorded. Samples were scanned over the wave number range of 4000 to 400 cm−1 at ambient temperature.

Thickness

The thickness test was carried out using a digital vernier caliper. The zero error was checked, and a single strip selected from each prepared batch was subjected to thickness measurement at three different places by placing the strip between the jaws of the caliper. Readings were measured and recorded as mean and standard deviation.

Tack test

The property of adhesion to a surface is called tack. To evaluate tack, a piece of paper was pressed between two strips. Results were recorded as tack-free, slightly tacky, tacky, or very tacky, according to the adhesion of the strips to the paper.

Tensile testing

Tensile testing is the primary test for evaluating the mechanical strength of the strips. Tensile testing was performed on a TIRA test 2810 E6 Universal Testing Machine (Germany) equipped with a 10 kN load cell, using the TIRA test software. Strips of 10 mm width and 80 mm length were cut using a specimen cutter to prevent imperfections along the length and edges. An initial grip separation of about 50 mm was used, and the test was performed at a crosshead speed of 50 mm per minute. For strong gripping, a thin polystyrene thermoform sheet was placed on both sides of the film strip to avoid slippage.

Folding fortitude

Folding fortitude, usually called folding endurance, is the resistance of a strip to breaking upon repeated folding at a single point. It is stated as the number of times a strip can be folded at the same point until a fracture occurs at that point. The strip was held by the thumb and index finger of each hand and folded repeatedly at the same point. The number of folds taken for the strip to break was reported as the folding fortitude (FF).

Weight uniformity

Weight uniformity is very important for the dose uniformity of formulations.
The usual dose is determined per calculated surface area in the case of strips. For evaluation of weight, six strips of each formulation were weighed individually and accurately on an electronic weighing balance previously set to zero, and the mean weight and standard deviation of each formulation were calculated.

pH determination

FDOSs are made to dissolve in the oral cavity, so their pH should lie in the range of the buccal cavity (5.5–7.4). One strip was selected from each batch and, for pH measurement, first dissolved in 2 mL of distilled water. The pH was measured with a 25CW microprocessor benchtop pH/mV meter (BANTE Instruments, China). The electrode of the pH meter was brought into contact with the strip solution, 10 min were allowed for the reading to stabilize, and the pH was then noted.

Percent moisture content

Percent moisture content (%MC) was determined by drying the strips to constant weight. A hot-air oven was used for drying the films. Two films from each batch were taken and weighed, then kept in the hot-air oven at 50°C for 24 h (so that constant weight could be achieved). The percent moisture content (or percent moisture loss) was calculated from the equation
$$\%\,\text{Moisture Content} = \frac{W_o - W_d}{W_o} \times 100,$$
where $W_o$ is the initial weight and $W_d$ is the dried weight of the strip.

In-vitro disintegration time and total dissolution time

FDOSs show rapid disintegration, taking no more than one minute. Distilled water (10 mL) was poured into a Petri dish that had been preheated to 37°C. A 1 × 1 cm² piece was cut from a strip and carefully placed on the surface of the water in a floating position, without sinking or sticking to the walls. A stopwatch was started as the strip was placed on the water. The Petri dish was shaken slightly, and the time taken for complete disintegration was noted. The time was counted for three cut portions of a strip from each batch.

Data analysis and numerical optimization

Disintegration time (DT) and total dissolution time (TDT) were the studied responses. A polynomial equation with interaction and quadratic terms was developed by the multiple linear regression analysis (MLRA) approach to study the selected responses. The mathematical expression of the MLRA model is
$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_{12} X_1 X_2 - \beta_{11} X_1^2 - \beta_{22} X_2^2,$$
where $\beta_0$ is the intercept, representing the arithmetic mean of all numerical outcomes of the 13 trials; $\beta_1$ and $\beta_2$ are the coefficients calculated from the observed experimental values of $Y$; and $X_1$ and $X_2$ are the coded levels of the independent variables. The terms $X_1 X_2$ and $X_i^2$ (i = 1 to 2) symbolize the interaction and quadratic terms of the studied model, respectively. Analysis of variance (ANOVA) was used to validate the experimental outcomes, which were described in the form of polynomial equations. Various combinations were tried to find the optimized formulation for delivery in the form of rapidly dissolving buccal films. The 3-D response surface methodology (RSM) graphs and 2-D contour plots were created using the output files generated by the software to describe the effects of the hydrophilic polymer and the plasticizer on the studied parameters. The two responses, DT and TDT, were also analyzed statistically through Design-Expert ver. 7.0 by the application of ANOVA.

Drug contents

Drug contents in the strips were estimated by the standard assay method.
For content uniformity, an individual film was dissolved in 250 mL of medium; 2 mL of this solution was diluted to 10 mL (S1), and 2 mL of S1 was further diluted to 10 mL (S2). The absorbance of S1 was measured at 315 nm for MH and that of S2 at 226 nm for SS, and the percent drug contents were then calculated by comparison with the MH and SS standard dilutions, respectively.

In-vitro drug release

In vitro drug release was determined using the magnetic stirrer method, with distilled water as the dissolution medium. A 250 mL beaker was placed on a preheated hot-plate magnetic stirrer set at 37°C, and 250 mL of dissolution medium was heated to 37°C in a separate beaker. An oral strip containing one dose was attached to the inner moistened wall of the beaker. The dissolution medium maintained at 37°C was immediately poured into the beaker, the stirring rate was set to 500 rpm, and the stopwatch was started, all three steps at the same time. As the quantity of the dissolution medium was reduced to half, the quantity of the aliquots was also reduced to half. A 2 mL sample was drawn at every time point and diluted to 10 mL for analyzing MH, and 2 mL of this 10 mL was taken and diluted to 10 mL for SS.

Kinetic analysis

Kinetic models were applied to the release data in order to find the best-fit model and the mechanism of release. The release data of all the formulations were evaluated for the Higuchi, zero-order, first-order, Hixson-Crowell, and related models using DD Solver®, a Microsoft® Excel-based add-in program.
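To make the model-fitting step concrete, here is a minimal SciPy sketch of the kind of fits DD Solver performs, applying the zero-order, first-order, Higuchi, and Korsmeyer-Peppas equations to a cumulative-release profile (Hixson-Crowell is omitted for brevity). The time points and release values below are made-up placeholders, not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([1, 2, 3, 5, 7, 9], dtype=float)          # minutes (placeholder)
Q = np.array([20, 38, 55, 75, 90, 99], dtype=float)    # % drug released (placeholder)

models = {
    "zero order":       lambda t, k0: k0 * t,
    "first order":      lambda t, k1: 100.0 * (1.0 - np.exp(-k1 * t)),
    "Higuchi":          lambda t, kH: kH * np.sqrt(t),
    "Korsmeyer-Peppas": lambda t, kKP, n: kKP * t ** n,   # n < 0.5 suggests Fickian diffusion
}

for name, f in models.items():
    p0 = [1.0] * (f.__code__.co_argcount - 1)           # one starting guess per parameter
    popt, _ = curve_fit(f, t, Q, p0=p0, maxfev=10000)
    ss_res = np.sum((Q - f(t, *popt)) ** 2)
    ss_tot = np.sum((Q - Q.mean()) ** 2)
    print(f"{name}: params={np.round(popt, 3)}, R^2={1 - ss_res / ss_tot:.4f}")
```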
Fourier transform infrared analysis

FTIR spectra of polyvinyl alcohol, sumatriptan succinate, metoclopramide, and the optimized formulation were recorded and are presented in . The IR spectrum of pure PVA revealed a number of peaks at different wave numbers. Initially, a broad band was observed at 3271.33 cm−1 due to O-H stretching vibrations of the hydroxyl group, followed by a strong peak at 1424.15 cm−1 due to bending vibrations of secondary O-H groups. Sharp and intense peaks at 2917.22 and 1322.17 cm−1 were due to stretching vibrations of C-H groups from the aliphatic backbone.
The IR spectrum of sumatriptan succinate exhibited evident peaks at 3371.21, 1299.17, 1237.15, 1082.33, and 637.12 cm−1 due to stretching vibrations of the N-H bond, C-N bond, S=O functional group, and C-S bond, respectively. The IR spectrum of metoclopramide exhibited prominent and sharp peaks at 3389, 3305.11, and 3185.88 cm−1 due to stretching vibrations of the N+-H bond, at 1538 cm−1 due to aromatic C=C vibrations, at 1595.12 cm−1 due to bending vibrations of the N+-H group, at 1262.11 cm−1 due to the C-O bond, and at 1632.31 cm−1 due to bending vibrations of the carbonyl (C=O) group. The spectrum of the optimized formulation was slightly different from the IR spectra of the individual formulation ingredients. In the IR spectrum of the formulation, the peaks at 3271.33 and 2917.22 cm−1, due to O-H stretching vibrations of hydroxyl groups and C-H stretching vibrations of the aliphatic backbone in PVA, were completely absent, and a few peaks of the drugs were also not present. Moreover, the intensity of several peaks at 1299.17, 1237.15, and 1082.33 cm−1 due to stretching vibrations of the N-H group of sumatriptan succinate, and of the C=C (1538 cm−1), N+-H (1595.12 cm−1), C-O (1262.11 cm−1), and C=O (1632.31 cm−1) peaks of metoclopramide, was markedly reduced in the formulation IR spectrum, which confirmed the compatibility of the ingredients, complexation, and the existence of both drugs within the prepared oral strips.

Characterization of formulations

Visual inspection

Prepared formulations were optically observed for their color, transparency, shine, surface touch, uniformity, and tackiness . Strips containing 150 mg PVA (1Pc, 2Pc, and 3Pc) were more transparent and uniform than the other formulations. At constant polymer concentration, tackiness increased with increasing plasticizer, because plasticizers enhance the adhesion of polymeric films to solids. Similar results were obtained in the current studies with PVA films. Moreover, plasticizer also promotes crystallization of PVA molecules and migrates to the surface of the films, so it should be added in appropriate and precise quantities when formulating PVA films.

Light microscopy

Light microscopy results showed that the formulations containing the lowest quantities of polymer displayed the drugs distributed as crystals more than the films containing more polymer. Thus, at each plasticizer concentration, the films with a 150 mg polymer load dissolved the maximum amount of drug. Among all the PVA-based formulations, 1Pc showed the highest uniformity .

Physicochemical characterization

The results indicated that the average weight of the strips ranged from 134.5 ± 1.04 mg for 1Pa to 192.7 ± 1.75 mg for 3Pc, suggesting that the weight of a film is directly proportional to the quantity of its ingredients; the low SD values indicate little variability among formulations. The thickness of the strips ranged from 0.044 ± 0.005 mm to 0.09 ± 0.007 mm . The results indicated that the thickness of the strips is directly proportional to their polymeric content, and the low SD values for all formulations indicate minimal variation in thickness within a strip. The folding endurance results of the PVA films showed excellent mechanical strength: none of the films fractured. This high folding endurance was due to the high mechanical strength and flexibility of PVA. The pH values of the PVA films were within the acceptable range of buccal pH, ranging from 5.61 ± 0.015 for formulation 1Pb to 6.02 ± 0.015 for 3Pc.
A slight increase in pH was observed with the addition of plasticizer (glycerol) and polymer. This may be due to the fact that the drugs are acidic in nature, and the addition of neutral agents may lower the acidity.

Percent moisture content (%MC) increased from 1Pa to 2Pa to 3Pa, from 1Pb to 2Pb to 3Pb, and from 1Pc to 2Pc to 3Pc, respectively, in . The PVA films contained glycerol at concentrations of 5%–10%; glycerol is a hydrophilic plasticizer, and an increase in its concentration causes an increase in the %MC of the strips.

Disintegration time (DT) as well as total dissolving time (TD) depends on the polymer and plasticizer concentrations. Maximum DT and TD values were observed in the formulations with the highest concentration of polymer and the lowest concentration of plasticizer, as in 1Pc (150 mg PVA and 5% glycerol), with DT values of 28.2 ± 0.3 s and 27.6 ± 0.2081 s and TD values of 95.3 ± 2.5166 s and 77.8 ± 1.6802 s, respectively. Minimum DT and TD values were observed where the polymer was at the minimum concentration and the plasticizer at the maximum, as in 3Pa (100 mg PVA and 10% glycerol), with a DT of 7.7 ± 0.2081 s and a TD of 26.4 ± 0.7937 s, respectively . The results showed that, at constant plasticizer concentration, DT and TD increase with increasing polymer concentration, whereas at constant polymer concentration they decrease as the plasticizer ratio increases. DT and TD increase with polymer concentration because the polymer increases the thickness of the film, and thicker films take more time to dissolve and disintegrate, while the decrease in DT and TD with increased plasticizer is attributed to the reduction in film strength as the plasticizer content increases.

Data analysis and numerical optimization

$$\text{DT} = 11.79 - 34.16 X_1 + 19.16 X_2 - 34.47 X_1 X_2 + 38.40 X_1^2 + 7.70 X_2^2$$
$$\text{TDT} = 61.01 - 112.72 X_1 + 42.96 X_2 - 117.33 X_1 X_2 + 115.74 X_1^2 + 39.90 X_2^2$$

The ANOVA results for the response surface quadratic model suggested that the applied model was significant, and the F, P, and R² values for both responses were significant . Furthermore, the polynomial equations generated through the application of ANOVA indicated that the responses are constructive, as the mean values are positive (11.79 and 61.01 for DT and TDT, respectively). However, the negative coefficient of X1, the first variable (PVA), indicates the comparatively resistive nature of the polymer with respect to the disintegration and total dissolving times of the prepared films. On the other hand, glycerol, being a good solubilizer and plasticizer, left its impact by assisting the breaking and solubilization of the films. The contour and 3-D graphs strengthen the finding that the polymer significantly increases, and the plasticizer significantly decreases, both DT and TDT. It is a common observation in the literature that an increase in the polymeric content of a film usually delays its solubilization, while plasticizers such as glycerol and PEG 400 can decrease the dissolution time of strips. This may be ascribed to the fact that the plasticizer decreases the resistance of the polymer-based film to dissolution as it seeps out of the film upon exposure to the dissolution medium. Loss of plasticizer from the film improves the penetration of the aqueous medium and hence results in quicker dissolution of the films and a lower dissolving time.
Considering instant disintegration and rapid dissolution as the target responses, with their minimum values as the desired goal, the optimized levels of the polymer and plasticizer were found to be 1.50 and 0.97, respectively. The formulation prepared at these optimized levels disintegrated in approximately 7 s and dissolved in approximately 25 s.

Tensile testing
Tensile strength depends on the plasticizer concentration, which modulates the tensile properties of the films. Film flexibility increases as the percentage elongation rises and the tensile strength and modulus of elasticity fall. At a given polymer concentration, tensile strength and Young's modulus decreased, and elongation at break increased, with increasing plasticizer. At a constant plasticizer percentage, an increase in polymer concentration also decreased tensile strength and Young's modulus, because these characteristics are inversely related to film thickness, which increases with polymer content. Elongation at break (EB) nevertheless increased, because although the plasticizer percentage was constant relative to the polymer, its absolute amount per film increased along with the polymer, and EB is directly related to the plasticizer content.

Drug contents
The PVA-based formulations held satisfactory amounts of both drugs: the % content of MH ranged from 99.2 ± 0.007 in 3Pb to 101.6 ± 1.1 in 1Pb, and the % content of SS ranged from 99.1 ± 1.6 in 1Pc to 102.3 ± 0.1 in 1Pb. All results lie within the USP 27 acceptance criteria (85%–115%).

In-vitro drug release
Standard solutions of 2 to 20 µg/mL were prepared to construct calibration curves using a UV spectrophotometer. The absorbance of SS in the presence of MH was measured at 226 nm, and the absorbance of MH in the presence of SS was measured at 315 nm, for each concentration. The SS curve was linear over 2 to 18 µg/mL, with a correlation coefficient of 0.9993 and the regression equation y = 0.1481x + 0.0109, while the MH curve was linear over 2 to 20 µg/mL, with a correlation coefficient of 0.9992 and the regression equation y = 0.031x − 0.0083.

Dissolution results
The in vitro dissolution test was performed in a 250 mL beaker containing 250 mL of distilled water, with a 1.5 inch magnetic bar stirred at 500 rpm on a magnetic stirrer. Among the PVA-based formulations, 1Pa, 1Pb, and 1Pc dissolved completely in 9 min (101.6% MH and 98.69% SS), 13 min (100.8% MH and 102.5% SS), and 15 min (100% MH and 100.2% SS), respectively. For complete dissolution, 2Pa gave 98.46% release of MH in 7 min and 102.1% release of SS in 9 min, 2Pb gave 99.22% release of MH and 101.8% release of SS in 11 min, and 2Pc gave 100% release of MH and 102.1% release of SS in 13 min. Complete dissolution of 3Pa occurred in 9 min, releasing 99.61% MH and 101% SS, and of 3Pb in 11 min, releasing 99.23% MH and 102.3% SS, while 3Pc released 99.19% MH in 13 min and 100.7% SS in 11 min.
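As an illustration of how the reported calibration lines translate raw UV absorbances into cumulative percentage release, the sketch below applies the two regression equations given above to a set of hypothetical readings. The absorbance values, sampling times, label claims per strip, and the assumption of a fixed 250 mL medium with no dilution or sampling correction are placeholders for illustration, not data from the study.

```python
# Sketch only: convert UV absorbance readings to % drug released using the
# calibration equations reported above (SS at 226 nm: A = 0.1481*C + 0.0109;
# MH at 315 nm: A = 0.031*C - 0.0083, with C in µg/mL).
# Readings, times, label claims and the no-dilution assumption are hypothetical.

VOLUME_ML = 250.0  # dissolution medium volume used in the beaker method

def conc_ss(absorbance):
    """SS concentration (µg/mL) from absorbance at 226 nm."""
    return (absorbance - 0.0109) / 0.1481

def conc_mh(absorbance):
    """MH concentration (µg/mL) from absorbance at 315 nm."""
    return (absorbance + 0.0083) / 0.031

def percent_released(conc_ug_per_ml, label_claim_mg):
    """Cumulative % released, ignoring sampling/replacement and dilution corrections."""
    released_mg = conc_ug_per_ml * VOLUME_ML / 1000.0
    return 100.0 * released_mg / label_claim_mg

# Hypothetical readings at successive time points (min) for one strip.
times = [1, 3, 5, 7, 9]
abs_ss = [0.08, 0.21, 0.34, 0.41, 0.44]
abs_mh = [0.02, 0.05, 0.09, 0.11, 0.12]

for t, a_ss, a_mh in zip(times, abs_ss, abs_mh):
    rel_ss = percent_released(conc_ss(a_ss), label_claim_mg=0.75)  # hypothetical SS content per strip
    rel_mh = percent_released(conc_mh(a_mh), label_claim_mg=1.0)   # hypothetical MH content per strip
    print(f"t = {t:2d} min   SS released = {rel_ss:5.1f}%   MH released = {rel_mh:5.1f}%")
```

In practice, samples containing mg-level doses would be diluted into the linear range of the calibration curves before applying these equations; the placeholder label claims above simply keep the arithmetic within that range.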
Kinetic analysis
DD solver® was applied to all formulations that showed complete release and significant results, fitting the zero-order, first-order, Higuchi, Korsmeyer–Peppas, and Hixson–Crowell models to their release profiles. The correlation coefficients (R²) for the zero-order model were lower than those of all other models for both the MH and SS release data, so drug release was not time dependent. In contrast, the R² values for first-order kinetics were above 0.9 for all formulations for both MH and SS, indicating concentration-dependent release. The Higuchi model plots also gave R² values above 0.9, except for SS in 1Pa, so a diffusion mechanism of drug release was closely followed. Fitting the Hixson–Crowell model likewise gave R² values above 0.9 for all formulations for both drugs, indicating a time-dependent change in the surface area of the formulations. When the beaker-stirring dissolution data were fitted to the Korsmeyer–Peppas model, the R² values were higher than for all other models, ranging from 0.9983 to 1.000 for MH and from 0.9960 to 1.000 for SS; these extremely high values confirmed the diffusion mechanism of drug release. The n values for all formulations were less than 0.5, indicating Fickian diffusion. The AIC values for the Korsmeyer–Peppas model were the smallest, confirming that this model gave the best fit to the data.
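For readers who wish to reproduce this type of analysis without DD solver, the snippet below shows one common way to estimate the Korsmeyer–Peppas parameters (k and the release exponent n) by linear regression on log-transformed data, restricted, as is conventional, to the early portion of the release curve. The release profile in the example is invented for illustration; the approach itself is simply the standard linearisation of Mt/M∞ = k·t^n.

```python
import numpy as np

# Korsmeyer-Peppas model: Mt/M_inf = k * t**n, linearised as
# log(Mt/M_inf) = log(k) + n*log(t). Conventionally only the portion of the
# curve with fractional release <= 0.6 is used for the fit.

def fit_korsmeyer_peppas(t_min, frac_released, cutoff=0.6):
    """Return (k, n, r_squared) from a log-log linear fit."""
    t = np.asarray(t_min, dtype=float)
    f = np.asarray(frac_released, dtype=float)
    mask = (f > 0) & (f <= cutoff) & (t > 0)
    x, y = np.log10(t[mask]), np.log10(f[mask])
    n, log_k = np.polyfit(x, y, 1)
    y_hat = n * x + log_k
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 10 ** log_k, n, 1.0 - ss_res / ss_tot

# Invented release profile (fraction of dose released vs time in minutes),
# used only to demonstrate the fitting procedure.
t = [0.5, 1, 2, 3, 5, 7, 9, 11]
frac = [0.19, 0.25, 0.33, 0.39, 0.48, 0.55, 0.78, 1.00]

k, n, r2 = fit_korsmeyer_peppas(t, frac)
print(f"k = {k:.3f}, n = {n:.3f}, R^2 = {r2:.4f}")
msg = ("n <= 0.5, consistent with Fickian diffusion from a thin film"
       if n <= 0.5 else "n > 0.5, suggesting anomalous (non-Fickian) transport")
print(msg)
```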
Thermal analysis
DSC thermograms of the individual ingredients and the optimized formulation were recorded to confirm their stability. The metoclopramide hydrochloride thermogram showed an initial endothermic peak attributable to loss of moisture. The DSC thermograms of metoclopramide hydrochloride (A), HPMC E5 (B), sumatriptan succinate (C), and PVA (D) showed prominent endothermic peaks at their melting points, that is, 171°C, 165°C, 171–180°C, and 200°C, respectively. The DSC thermogram of the optimized formulation was also recorded to check the stability of the developed delivery system for combination therapy; initially there was a short endothermic peak near 80°C due to loss of moisture, followed by a broad endothermic peak between 225°C and 375°C. These studies indicate that the thermal stability of the developed formulation was markedly increased at higher temperatures. Thermogravimetric analysis (TGA) was carried out to assess the stability of the neat ingredients and the fabricated patches over an increasing temperature range. The TGA thermogram of metoclopramide hydrochloride showed an initial mass loss of 25.21% at 110°C; a further mass loss of 20% occurred at 200°C, leaving 55% of the mass intact. The TGA thermogram of PVA revealed a 73.23% mass loss near 220°C, with 25% of the mass remaining intact over the entire temperature range. Sumatriptan succinate remained 52% intact at 231°C, and the HPMC E5 thermogram showed a mass loss of 43% at 237°C. In contrast, the TGA thermogram of the developed matrix patches showed that 35% of the mass remained intact at the higher temperature of 310.15°C. The thermal studies thus confirmed that the stability of the polymeric patches was improved at higher temperatures.

FTIR spectra of polyvinyl alcohol, sumatriptan succinate, metoclopramide, and the optimized formulation were recorded. The IR spectrum of pure PVA revealed a number of peaks at different wavenumbers: a broad band at 3271.33 cm−1 due to O-H stretching vibrations of the hydroxyl group, followed by a strong peak at 1424.15 cm−1 due to bending vibrations of secondary O-H groups, and sharp, intense peaks at 2917.22 and 1322.17 cm−1 due to stretching vibrations of C-H groups of the aliphatic backbone. The IR spectrum of sumatriptan succinate exhibited evident peaks at 3371.21, 1299.17, 1237.15, 1082.33, and 637.12 cm−1 due to stretching vibrations of the N-H bond, C-N bond, S=O functional group, and C-S bond, respectively. The IR spectrum of metoclopramide exhibited prominent, sharp peaks at 3389, 3305.11, and 3185.88 cm−1 due to stretching vibrations of the N+-H bond, at 1538 cm−1 due to aromatic C=C vibrations, at 1595.12 cm−1 due to bending vibrations of the N+-H group, at 1262.11 cm−1 due to the C-O bond, and at 1632.31 cm−1 due to bending vibrations of the carbonyl (C=O) group. The spectrum of the optimized formulation differed slightly from the IR spectra of the individual ingredients: the peaks at 3271.33 and 2917.22 cm−1, due to O-H stretching of hydroxyl groups and C-H stretching of the aliphatic backbone in PVA, were completely absent, and a few drug peaks were also missing from the formulation spectrum. Moreover, the intensities of several peaks, those at 1299.17, 1237.15, and 1082.33 cm−1 attributed to stretching vibrations of the N-H group of sumatriptan succinate, and the C=C (1538 cm−1), N+-H (1595.12 cm−1), C-O (1262.11 cm−1), and C=O (1632.31 cm−1) bands of metoclopramide, were markedly reduced in the formulation spectrum. These changes confirmed the compatibility of the ingredients, complexation, and the presence of both drugs within the prepared oral strips.
Co-administration of anti-migraine and anti-emetic drugs was successfully achieved by formulating fast-dissolving oral strips.
The developed formulations were uniform, stable, and economical, and they offer dissolution-assisted bioavailability without the use of superdisintegrants or surfactants in the composition. Dosing frequency can be decreased and patient compliance markedly improved. Moreover, fast-dissolving oral strips (FDOSs) can serve as an ideal delivery system for drugs with poor oral bioavailability or a short half-life and for those degraded in the stomach. The aims and objectives of the study were achieved by formulating fast-dissolving buccal films co-loaded with a combination of anti-migraine and anti-emetic drugs. Rapid disintegration and dissolution were achieved, followed by the instant release of both drugs. This approach would be a worthwhile addition for improving patient compliance, not only by reducing cost (as neither a superdisintegrant nor a surfactant was added to the formulation) but also by enhancing bioavailability through rapid dissolution followed by absorption via the buccal cavity, thereby avoiding hepatic first-pass metabolism.
The Effectiveness of Sensory Adaptive Dental Environments to Reduce Corresponding Negative Behaviours and Psychophysiology Responses in Children and Young People with Intellectual and Developmental Disabilities: A Protocol of a Systematic Review and Meta-Analysis
e0bb0583-5afc-4dc7-8da3-27e15300b4c6
9654101
Physiology[mh]
Intellectual and Developmental Disabilities (IDDs) are a group of conditions due to physical, learning, language, or behaviour impairments. These conditions impact day-to-day functioning and include attention deficit hyperactivity disorder, autism spectrum disorder, cerebral palsy, Down syndrome, intellectual disability, learning disability, and other developmental delays as classified by the American Psychiatric Association. IDDs contribute significantly to total disease burden globally. A high prevalence of IDDs has been documented in the United States in a systematic review by Anderson et al., with a prevalence of 69.9 per 1000 for children and 41.0 per 1000 for adults. This picture is not unique to the United States, with several other studies having documented similarly high prevalence in countries such as Australia and the United Kingdom. Evidence suggests that the health inequalities between people with and without IDD are closely tied to individual, environmental, social, and/or economic determinants. Of particular concern is the oral health of people with IDD, with research showing that this population has poorer oral health than those without, specifically significantly higher levels of untreated dental caries, periodontal disease, and dental plaque, worse gingival status, and fewer decayed and filled permanent and primary teeth. The implications of poor oral health are substantial, with emerging research highlighting its destructive impact on general health. This vulnerability to poorer oral health in the IDD population has been linked to complex needs that increase the risk of dental disease and to challenges in accessing both routine and preventative dental services. Several studies emphasise barriers to accessing dental care for individuals with IDDs, as reported by individuals, families, and dental practitioners. Barriers identified in these studies include over-stimulating physical environments, challenges with the waiting room, sensory processing issues, hyper-empathy, oral aversion, negative adaptive behaviours, and limited health provider knowledge or education. One study reported that dentists' attitudes also serve as a barrier to providing care for people with IDD, citing difficulties with behaviour management, inadequate training, previous experiences, and the severity of the patient's condition. A large body of evidence supports the view that dental anxiety is exacerbated by the dental environment. Dental anxiety is a psychophysiological state described as an abnormal fear or worry in apprehension of dental treatment. It has been described as multidimensional, including somatic, cognitive, and emotional aspects. Dental anxiety causes altered behaviour characterised by increased aggression and avoidance, as well as physical symptoms such as sweating, decreased gastrointestinal motility, and cutaneous vasoconstriction. Dental anxiety is significant for people with IDD; in this population, it has been linked to low compliance and to psychological, cognitive, and behavioural consequences during treatment that impact overall oral health. There is a growing body of evidence linking experiences of dental anxiety for children and young people with IDD to sensory processing issues. Current research has identified a high prevalence of co-morbid sensory processing difficulties, involving modulation or discrimination issues associated with maladaptive behaviours, in this population.
Sensory integration is a neurological function that processes and organises sensory inputs from one's own body and the environment into functional outputs that enable an individual to engage effectively in activities of daily living. Evidence emphasises the impact of sensory processing challenges on behaviour, which has been linked with non-compliance and reduced engagement in dental appointments and oral health practices. Lee and Chang found that children with IDDs were particularly sensitive to the noise, smell, and sight of sharp instruments, including needles, in the dental environment, which increased their dental anxiety and refusal of treatment, a response provoked by heightened sensory processing issues relating to that environment. Multiple approaches are used to address these barriers, such as pharmacological sedation, as well as non-pharmacological approaches including restrictive restraint boards, desensitisation, behavioural and cognitive training, video-modelling, reinforcement, distraction/relaxation, tell-show-do techniques, and social stories. Restrictive and pharmacological practices do not address the underlying factors that cause the behaviour, and they limit personal freedom, quality of life, and participation. Importantly, evidence highlights that sedation fails to promote preventive oral care. This suggests that additional measures need to be employed to increase participation in regular dental treatment and so reduce the oral health burden. Specifically, sensory adaptive dental environments (SADE) have been studied as a means of reducing maladaptive behaviours in the IDD population. This approach uses a "Snoezelen room", a well-equipped multi-sensory environment combining good lighting, mesmerising sound, deep pressure, vibration, aroma, and tactile sensation. The implementation of these sensory adaptations aims to regulate sensory 'fight or flight' responses, thereby reducing the associated maladaptive behaviours and, in turn, dental anxiety. Although there is a growing body of evidence suggesting that SADE are effective in regulating maladaptive behaviours associated with dental anxiety, high-quality synthesised evidence is lacking. To inform best practice, rigorous research on SADE is needed so that dental professionals can better address the needs of individuals with IDD, reduce dental anxiety, and improve oral health. Multiple scoping searches were conducted to assess previous systematic reviews in this topic area, and the assessment of three reviews identified various knowledge gaps. Two reviews focused broadly on non-pharmaceutical strategies in general and therefore contained only limited information on sensory adaptive environments as an approach. Another recent systematic review, conducted by Ismail et al., reported its methods too poorly to allow replication, and its study population was not defined by a specific diagnosis, focusing generally on children 6–12 years of age. Therefore, there is no synthesis of the literature that encompasses both children and young people. To inform best practice, rigorous research is needed to synthesise the evidence on SADE used by dentists to assist with the management of children and young people with IDD. Therefore, the purpose of this systematic review is to assess the effectiveness of SADE in reducing dental anxiety and the corresponding negative behaviours and psychophysiology responses in children and young people with IDD. What is the role of SADE in providing dental treatment for children and young people (up to the age of 24 years) with IDD?
This review aims to address the following questions:
What are the common sensory environmental strategies used to decrease negative behaviours and psychophysiology responses of dental anxiety in children and young people with IDD?
Are SADE effective in reducing dental anxiety and the corresponding negative behaviours and psychophysiology responses in children and young people with IDD?
Do SADE increase the participation of children and young people with IDD in oral health procedures?

3.1. Participants
This review will consider studies that included participants who are children or young people up to the age of 24 years with a diagnosis of IDD confirmed by a physician. There is no restriction on the type or severity of IDD.

3.2. Intervention(s)
This review will consider studies that evaluate sensory adaptive environments in a dental setting, during oral procedures or in the waiting room. The interventions must aim to modulate sensory sensitivities targeting any of the senses: sight, sound, touch, smell, taste, the vestibular sense (sense of head movement in space), proprioception (sensations from muscles and joints), and interoception (sensations relating to the physiological/physical condition of the body). These strategies can include, but are not limited to, a partially dimmed room with lighting effects, vibroacoustic or somatosensory stimuli, visual distraction, or deep pressure. Interventions can involve single- or multi-sensory approaches. Dental procedures conducted in studies that involve sedation will be excluded from the review.

3.3. Comparator(s)
This review will consider studies that compare the intervention to a control (no intervention), a waitlist, or usual care (a regular dental environment).

3.4. Outcomes
This review will consider studies that report either objective or subjective measures of cooperation, behaviour, and psychophysiology (anxiety). Primary outcomes will be categorised according to the International Classification of Functioning (ICF) and adapted from the oral health framework by Faulks and colleagues. These include participation restriction and body structure and function. The participation restriction and activity participation outcome will focus on behaviour and cooperation during the dental procedure; examples of acceptable outcome measures include compliance, cooperation, or participation scores (Frankl score, negative behaviour checklist, children's dental behaviour rating scale, or anxiety and cooperation scale) and interviews or questionnaires of dentists, participants, or carers. Body structure and function outcomes include anxiety and psychophysiology responses; examples of acceptable outcome measures include oxygen saturation, electrodermal activity, heart rate, and skin conductance.

3.5. Types of Studies
This review will consider only RCTs, including parallel-group, crossover, cluster, and factorial designs. Non-experimental observational and non-randomised study designs, including pre-post study designs, will be excluded from this review.
The proposed systematic review will be conducted in accordance with the Joanna Briggs Institute (JBI) methodology for systematic reviews of effectiveness and reported in accordance with the PRISMA guidelines. PRISMA for systematic review protocols (PRISMA-P) was used to draft the protocol. The review is registered with PROSPERO (CRD42022322083).

4.1. Search Strategy
The search strategy has been developed following a Population, Intervention, Comparator, Outcome and Study Design (PICOS) framework. A combination of Medical Subject Headings (MeSH) terms and keywords using Boolean operators, spelling variations, phrase searching, and truncation has been devised to increase sensitivity and ensure satisfactory search retrieval. Two reviewers (K.R. and N.C.), both experienced with database searching and in consultation with a Health Sciences Librarian, have pre-tested the search strategy in Medline (OVID). Once the Medline search is finalised, it will be adapted to the syntax and subject headings of the other databases. Finally, a hand search of the reference lists of relevant studies that match the inclusion criteria and of previously published systematic reviews will be conducted to identify further eligible studies. The following electronic databases will be searched, without any restriction on publication date, type, language, or region: Medline (OVID), The Cochrane Library, Embase, Web of Science, Google Scholar and OT Seeker.
For this review, articles will be included with no restriction on language; any relevant non-English articles will be translated into English where possible.

4.2. Study Selection
Studies identified by the electronic databases and citation screening will be imported into EndNote X9 (Clarivate Analytics, Philadelphia, PA, USA), and duplicates removed. Following a pilot test, two independent reviewers (K.R. and A.A.) will screen the titles and abstracts against the inclusion/exclusion criteria and, if eligibility is unclear, the full text will be retrieved. Articles that meet the inclusion criteria will be retrieved in full and their details imported into the JBI System for the Unified Management, Assessment and Review of Information (JBI SUMARI). Two reviewers (K.R. and A.A.) will then independently assess the full-text articles and decide whether they meet the eligibility criteria. If required, the study authors will be contacted to seek additional information. Any disagreements that arise between the reviewers at any stage of the selection process will be resolved through discussion including a third reviewer (R.C.). If multiple reports are published from a single study, these will be linked together. Throughout this process, all reasons for exclusion of papers at the full-text review stage will be recorded. The results of the study selection process will be presented in a PRISMA flow diagram.

4.3. Assessment of Methodological Quality
Two reviewers (K.R. and R.C.) will independently assess the methodological quality of each study included in this review. The Cochrane tool for assessing risk of bias in randomised trials (the RoB 2 tool for crossover trials) will be used. Any disagreements that arise between the reviewers will be resolved through discussion, including a third reviewer (A.A.). Authors of papers will be contacted to request missing or additional data for clarification where required; if a response is not received after two contact attempts, the study will be assessed on the basis of its available information. The level of risk of bias in each domain will be presented separately for each study in tables and figures and contextualised in a descriptive format in the final review publication. This narrative will outline the methodological issues and how they influenced the interpretation of the results. All studies included in this review will undergo data extraction and synthesis irrespective of their methodological quality.

4.4. Data Extraction
A standardised data extraction form has been developed and will be pilot tested (on one study) and subsequently refined to ensure that all relevant data are captured. A calibration exercise will be performed to ensure consistency across reviewers. Two review authors (K.R. and A.A.) will independently extract data; discrepancies will be identified and resolved through discussion with a third author (N.C.) where necessary. Authors of papers will be contacted to request missing or additional data where required; if a response is not received after two contact attempts, the study will be assessed on the basis of its available information. The extracted data will be entered into an Excel sheet and will include specific details about each study: article details, participant characteristics, intervention description, outcome measures, and funding.

4.5. Data Synthesis
If possible, individual studies will be pooled for a statistical meta-analysis using STATA v15 (StataCorp, College Station, TX, USA).
An assessment of the studies' suitability for pooled analysis will be made following the data extraction process. Odds ratios (for dichotomous data) or weighted (or standardised) final post-intervention mean differences (for continuous data) will be used to calculate effect sizes, and their 95% confidence intervals will be calculated for analysis. The degree of statistical heterogeneity will be assessed using the standard I-squared and Chi-squared statistics. Because some degree of heterogeneity is expected across the studies, a random-effects model will be applied for the meta-analysis. Subgroup analyses will be conducted where there are sufficient data (more than 10 studies), based on age, diagnosis, and severity. Sensitivity analyses will be conducted to test the impact of the risk of bias of the included studies on the outcomes. Where statistical pooling is not possible and/or there is substantial heterogeneity, a narrative synthesis of the study findings, including tables and figures, will be provided. Publication bias will be assessed using a funnel plot if there are 10 or more studies included in a meta-analysis. Statistical tests for funnel plot asymmetry (Egger, Begg, and Harbord tests) will be performed where appropriate.
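To make the planned pooling step concrete, the sketch below shows a minimal DerSimonian-Laird random-effects calculation of a pooled standardised mean difference together with the I-squared statistic, mirroring the approach described above. The study effect estimates and variances are invented placeholders; the protocol specifies STATA v15 for the actual analysis, so this Python version is purely illustrative.

```python
import numpy as np

# Hypothetical per-study standardised mean differences (intervention vs control)
# and their variances; these numbers are placeholders, not data from any study.
effects = np.array([-0.62, -0.35, -0.80, -0.15, -0.48])
variances = np.array([0.045, 0.060, 0.052, 0.081, 0.039])

# Fixed-effect weights and Cochran's Q
w = 1.0 / variances
pooled_fe = np.sum(w * effects) / np.sum(w)
q = np.sum(w * (effects - pooled_fe) ** 2)
df = len(effects) - 1

# DerSimonian-Laird estimate of between-study variance (tau^2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooling
w_re = 1.0 / (variances + tau2)
pooled_re = np.sum(w_re * effects) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
ci_low, ci_high = pooled_re - 1.96 * se_re, pooled_re + 1.96 * se_re

# Heterogeneity expressed as I^2
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"Pooled SMD (random effects): {pooled_re:.2f} "
      f"(95% CI {ci_low:.2f} to {ci_high:.2f})")
print(f"tau^2 = {tau2:.3f}, Q = {q:.2f} on {df} df, I^2 = {i2:.1f}%")
```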
This systematic review aims to assess the effectiveness of SADE in reducing dental anxiety and the corresponding negative behaviours and psychophysiology responses in children and young people with IDD. The findings from this review may widen the scope of occupational therapy to apply an ecological and sensory lens in collaboration with dentists. The review is also intended to strengthen evidence-based practice by synthesising the available evidence to support better oral health care outcomes for the IDD population.
Perspectives on training quantitative systems pharmacologists
624da0e8-583c-4501-9810-39bb06e963bf
9197534
Pharmacology[mh]
The ultimate goal of QSP in pharmaceutical development is to influence critical decisions. This goal is achieved by providing clear, actionable, and understandable quantitative data-driven conclusions and, importantly, communicating and advocating for the implications of those conclusions in the industry. Starting from the need to generate conclusions, we first examine the practical toolbox of a QSP scientist. The most frequently used tool of a QSP scientist is mechanism-based, mathematical modeling. The developed models of systems physiology and systems pathology allow us to deeply understand how different parts of the body coordinate to function properly (physiology) as well as how dysfunction of those parts results in disease (pathology). In addition, a QSP scientist should also have a solid grasp of the principles of clinical pharmacology and pharmacokinetics (PK), including being able to understand and predict drug pharmacology using traditional, PK/pharmacodynamics-type modeling. At the end of the day, a QSP scientist should be able to integrate these different types of models together to interpret the efficacy and toxicity of drugs and provide insights on how treatments can be optimized. Data, both preclinical and clinical, play an essential role in the daily work of a QSP scientist. Hence it is not surprising that a QSP scientist would need some data-handling tools in the toolbox. Mechanistic models need to be compared with observed data for parameter estimation, calibration, and validation. Furthermore, when the mechanism is not clear, or the data are too sparse to support the development of a mechanistic model, data-driven approaches such as machine learning and time-series analysis would be useful to extract patterns from the data. Mechanistic modeling and data-driven modeling study the systems from two different points of view. On one hand, mechanistic models derive the emergent behaviors of complex, dynamical systems with nonlinear interaction between their parts; hence they are also referred to as the bottom-up approach. On the other hand, data-driven models relate a set of inputs to a set of outputs; these are also referred to as the top-down approach. We believe that a QSP scientist should have a good understanding of the power and limitations of both kinds of tools so that suitable tools can be chosen, modified, and combined to best solve the biomedical challenges at hand. To deeply understand how these tools work and where their boundaries are, a QSP scientist would need a solid theoretical foundation in applied mathematics (e.g., mathematical sciences, physics, theoretical chemistry, computer science). This ensures that the scientist understands how different biological and physiological processes should be described using proper mathematical formalism and how correct numerical analysis should be applied to analyze the system. For example, a good understanding of differential equations and numerical integration would help avoid numerical errors when integrating stiff dynamical systems, experience with nonlinear dynamical system theory would help with grasping the qualitative boundaries of complex biological models, and experience with various optimization algorithms allows the QSP scientist to increase computational efficiency without impairing the soundness of the results. Proficiency with coding and the relevant software packages is essential for the practical work of a QSP scientist.
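As a small illustration of the kind of mechanism-based model a QSP scientist works with daily, the sketch below integrates a one-compartment pharmacokinetic model coupled to an indirect-response (turnover) pharmacodynamic model using a solver suitable for stiff systems. The model structure, parameter values, and dosing are generic textbook choices for illustration and are not taken from any specific program or publication.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic one-compartment PK with an indirect-response (turnover) PD model:
#   dC/dt = -ke * C                                    (drug concentration)
#   dR/dt = kin * (1 - Imax*C/(IC50 + C)) - kout * R   (inhibited response production)
# All parameter values below are illustrative placeholders.
ke, kin, kout = 0.3, 10.0, 0.1   # 1/h, units/h, 1/h
imax, ic50 = 0.9, 2.0            # dimensionless, mg/L

def rhs(t, y):
    c, r = y
    dcdt = -ke * c
    drdt = kin * (1.0 - imax * c / (ic50 + c)) - kout * r
    return [dcdt, drdt]

# Initial conditions: IV bolus giving C0 = 10 mg/L; response at baseline kin/kout.
y0 = [10.0, kin / kout]
t_eval = np.linspace(0.0, 48.0, 200)

# BDF is an implicit method appropriate for stiff systems such as larger QSP models.
sol = solve_ivp(rhs, (0.0, 48.0), y0, method="BDF", t_eval=t_eval,
                rtol=1e-8, atol=1e-10)

i_min = np.argmin(sol.y[1])
print(f"Minimum response: {sol.y[1][i_min]:.1f} units at t = {sol.t[i_min]:.1f} h "
      f"(baseline {kin / kout:.0f} units)")
```

Even a toy model like this exercises the points made above: the right-hand side encodes mechanism, the solver choice reflects numerical judgment, and comparing the simulated response with observed data would be the next, data-handling step.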
In addition, the ability to develop code empowers a QSP scientist with more flexibility to develop novel tools and should be encouraged. In the absence of such expertise, many excellent QSP scientists have instead been well trained in domain expertise, with critical mechanistic thinking and a focus on key, disease-specific questions, coupled with training on at least one computational platform. Deep familiarity with one platform can often be leveraged to more easily use new tools. The ability to work with several different tools and programming languages allows a QSP scientist to cross-check computational results across different platforms so that the risk of computational errors can be minimized. The toolbox is by no means universal or standard, and different QSP researchers will likely add personalized tools based on their training background and working experience. A QSP scientist often must update or modify the computational toolbox to fit the needs of biomedical teams and to effectively use the available knowledge and data, so that the delivered work adds value by reducing risk and increasing efficacy. Given this, the ability to adapt quickly and learn fast is essential for a QSP scientist at work. In this regard, we should also train QSP scientists to master useful soft skills in addition to computational skills. Pharmacometrics departments reflect considerable diversity of educational backgrounds, so a QSP scientist often needs to work, and communicate well, with experts from very different backgrounds. Successful communication often requires the QSP scientist to demonstrate both technical excellence (i.e., why a certain algorithm was used and how it is implemented to eliminate potential computational error) and biomedical impact. For instance, rather than elaborating on the technical details, the QSP scientist should communicate something such as "compared with the current practice, incorporation of the modeling and analysis could help reduce the needed time by 21%~25%" for nonmodeling decision makers. Trainees should be provided with experience in developing research relationships with a variety of scientists from disparate fields. To effectively communicate impact, a QSP scientist often needs to learn more than modeling and its analysis. For example, to compute the time reduction, it is essential to understand what the current process is and how the time is distributed. In this regard, a healthy amount of curiosity about how the data were collected, what the current solutions are, and how the computational results will be used in the overall pipeline helps the QSP scientist deliver impactful results. There is considerable debate surrounding the degree to which QSP scientists need to be subject matter experts in the therapeutic area in which they work. Certainly, understanding both the modeling and the biology gives QSP scientists more credibility on the teams to which they are assigned. However, given the rapid pace of technological change and the complexity of real-world needs, a QSP scientist can expect to dive into new areas frequently and cope with novel challenges routinely; hence, a QSP scientist could be working in immunology one moment and diabetes the next. The argument can be made that QSP scientists need a basic understanding of biology so that they can effectively interact with other team members who are indeed subject matter experts to develop their models.
QSP scientists often gain knowledge from different disciplines and should be comfortable stepping out of their comfort zones. Interdisciplinary programs should nurture such courage and open-mindedness so that trainees continue to grow during their careers. The pharmaceutical industry has well recognized the need for better education of QSP scientists. As educators working in academic and industrial institutions, we believe that updated education strategies and their continuous improvement are essential to meet such needs. In Supplementary Text S1, we elaborate on currently available resources and methods for training. Because it would be impossible for either the academic or the industrial field to carry out the education tasks alone, we hope that more academic and industrial educators can join forces in training our next-generation quantitative systems pharmacologists (Figure ): the academic community integrates both hard skills (computation, math, programming, etc.) and soft skills (communication, teamwork, etc.) in the training curriculum, and the industrial community contributes by providing internship opportunities as well as feedback on which skills are most used in actual work. We envision that such a synergistic cycle will help fulfill the tremendous potential of QSP modeling by expanding the pool of next-generation QSP scientists who are proficient with different tools and are able to deliver impactful results. We hope this perspective is a call to the QSP community for an intentional approach to developing this synergistic cycle and the relevant best practices. The authors declared no competing interests for this work. No funding was received for this work. Supplementary Text S1
Recommendations by the European Network of Paediatric Research at the European Medicines Agency (Enpr-EMA) Working Group on preparedness of clinical trials about paediatric medicines process
c92413a7-fae6-4057-adf5-1e6ed4ee23af
8666697
Pediatrics[mh]
Children deserve to be treated with high-quality medicines based on robust scientific information. Despite many improvements, including the introduction of new regulations, the availability of medicines for children is suboptimal because of the lack of relevant clinical trials, which in turn reflects the difficulties in implementing and conducting these trials. The number of eligible paediatric patients is often limited, and this requires particular attention to trial design. Patients and their parents may be reluctant to enrol in a trial for many reasons. Research sites often overestimate what is possible, both in terms of recruitment and in terms of their ability to follow trials with the enhanced level of attention needed for the safety of a vulnerable population. Sites and other groups often underestimate the efforts required to run a trial (resources and time) and the burden of a trial experienced by study participants and their families. Drug companies, regulators and ethics committees can have different views about what should be done during drug development. Insufficient consideration of these complexities at the planning stage of a trial leads to delays in the delivery of trial results or sometimes even failure of the trial, with potential loss of new therapy opportunities for the paediatric population. Experience suggests that some of these difficulties can be addressed before a trial opens. Discussion in 2016 between stakeholders under the remit of the European Network of Paediatric Research at the European Medicines Agency (Enpr-EMA) suggested that a shared framework for preparing trials is needed. Accordingly, a document was developed by a Working Group of Enpr-EMA which sets out recommendations for discussions about trial preparedness in paediatrics. We define trial preparedness as a structured assessment of the key factors that could increase the likelihood of a smooth and timely course of a paediatric clinical trial, integrating information from multiple stakeholders on what is possible within individual studies and therefore also for the overall drug development plan within which a trial is embedded. Trial 'feasibility' is the likelihood of completing a trial in a timely manner. This document moves beyond the definition of feasibility to present a global determination of all aspects of a trial that need to be prepared. One significant factor of preparedness is the study design, but this is not the only one. By design, we mean the selection of methods to answer a research question. When working with the paediatric population, it is essential to establish explicitly the rationale of the benefit of the research question for children. In parallel, trial design needs to take account of the specificities of neonates, infants, children and young people while maximising the use of extant data (including preclinical data such as toxicity) and minimising the burden of research in these populations. Many additional factors may play a role in designing a trial, such as the target disease, the available data and the phase of development, and most of them cannot be easily standardised within a guideline document. Thus, trial design is not discussed further in the document. The recommendations in the Enpr-EMA document target companies and other organisations responsible for organising trials (sponsors) as well as people whose work includes the preparation of trials or the review of trials before they open.
The recommendations will be relevant to investigators and people with a range of functions, including clinical trial operations, clinical staff who work on trials (doctors, nurses and pharmacists) and administrators. In addition, the document will be relevant to people who review clinical trials in companies, academic and clinical institutions, patient advocacy groups, regulators, research ethics committees and research infrastructures. However, it should be noted that the document does not describe all aspects of 'sponsor readiness', such as operational aspects within sponsors and intermediary organisations, for example, contract research organisations (CROs), or strategic factors, such as patient need and economic opportunities. Furthermore, activities relevant to the development of age-appropriate formulations to pharmaceutical quality standards, as well as activities to support marketing of products, are important factors influencing paediatric trials but are out of the scope of the document. In order to support the development of these recommendations, the group collated all existing resources, such as current regulatory guidance, outputs from previous initiatives and Enpr-EMA working groups, and published literature. In addition, the team sought to collect the experience and suggestions of different stakeholders by developing a survey and performing direct interviews. A wide range of stakeholders, including sponsors, investigators, patient organisations, regulators and paediatric clinical research networks, was included to provide the broadest spectrum of knowledge and experience. The stakeholders answered an extensive questionnaire covering four different areas of the planning and conduct of a paediatric clinical trial: the planning phase, preparation of the study, study conduct and poststudy aspects. Finally, an adapted version of the survey was shared with young people's advisory groups. The key messages identified during these surveys and interviews have been included in the main body of the document and support its conclusions. A detailed publication of these research findings is under preparation.
Recommendations and principles of good preparation
For the majority of paediatric clinical trials, problems can be addressed by using all available data to estimate what is possible using a structured approach. Adequate preparation, however, cannot remove all of the difficulties or estimate achieved patient numbers with complete accuracy. Nonetheless, a well-prepared, well-designed trial is likely to require fewer changes during its course, be run in a shorter time frame and achieve the expected objectives within the forecasted costs. Trial preparation should be initiated before, and conducted in parallel to, the design of the development plan and of the individual trials, as well as in parallel to sponsor readiness. Trial conduct is then often improved iteratively, by learning during the execution of the trial. Planning any trial starts from considering whether there is sufficient scientific rationale and a real clinical need to answer a specific research question. It remains critical to define clearly a meaningful trial objective and whether this addresses a relevant paediatric unmet need. To this aim, the perspective of children and families, and of patient advocacy groups, can make significant contributions to the design and organisation of the trial.
This perspective is critical in developing a successful trial as it can have a direct impact on recruitment and the feasibility of a study. Very often, limited resources are allocated at this early stage to support trial preparation because of the uncertainties about the effective execution of the trial, but remuneration for well-conducted preparedness activities may represent a worthwhile investment to facilitate the high-quality conduct of trials (including recruitment figures and complete data sets), thereby avoiding expenditure on poorly conducted, inconclusive trials and development plans. The principles of good preparation are described in the full guidance document ( https://www.ema.europa.eu/en/documents/other/preparedness-medicines-clinical-trials-paediatrics-recommendations-enpr-ema-working-group-trial_en.pdf ) and are summarised in Box 1.
Box 1: Summary principles of good preparation (for the full list, please refer to the full guidance document)
Principles:
Develop a time-sensitive understanding of the context for planning of the trial: how many sites (with the facilities required by the trial), how many participants at each site, and the costs of the trial.
Contributions to preparedness can be data, estimates, judgements or opinions, but all should be verifiable.
Look for sources of data and state the methods used to find information (be aware of potentially different uses of the same terms for conditions and diseases in different contexts), including: literature data on disease prevalence (including reviews, case reports and disease registries as applicable); preclinical evidence; population-based registries; patient registries; drug registries; real-life data repositories and electronic health records; site data; and paediatric research networks or initiatives.
Use opinion from experts based on experience, including nurses, study coordinators and physicians from all sites, not just large teaching hospitals. Supplement this with a small number of opinion leaders.
Take into account the available data on the natural history (including prognosis) of the condition, and relevant subsets of the condition under investigation, when assessing the number, location and readiness of potential participants.
Develop awareness of other trials that may lead to competition for resources or recruits, or opportunities for coenrolment.
Take account of clinical reality across all trial sites. Variation in clinical practices across countries and between therapeutic areas should be considered.
Identify the factors that are critical to the quality of the trial and the risks that threaten the integrity of these critical factors.
Identify ethical and legal issues of the research and responses to potential questions/objections (eg, direct benefit, risk minimisation, child assent and confidentiality), taking account of differences across regions.
Ensure appropriate development and availability of age-appropriate formulations based on target populations.
Account for the social-economic status of the research locations, particularly for international trials.
Focus on the burden on participants and their families: attendance at trial visits for the child, such as time, inconvenience and impact on school and leisure activities (when possible, use available technology at home); parents'/caregivers' burdens of a child's participation in a trial, including effects on work and the possibility of reimbursing costs; and the clinical burden on the patient on top of standard treatment (eg, blood sampling).
Carefully plan the time course of the trial.
Do not assume a linear rate of recruitment, particularly at site opening.
Consider the need to gather data that support health technology assessment and reimbursement decisions, integrated with, or in parallel to, clinical development.
Involve sites and networks (including clinical and methodological expert groups), patients and patient advocacy groups to promote the quality of protocol and process design (including information leaflets and consent forms).
Seek regulatory input as early as possible (eg, on trial design, the need for age-appropriate formulations, on preclinical trials or other regulatory requirements).
Aim for global alignment of the contributions to preparedness, but equally the contributions should reflect the diversity needed in elements of trial preparedness.
Conduct clinical trial simulations: in silico and in clinical simulation facilities. Update preparedness work in case of a significant delay or interfering event that may have affected the relevance of the previous simulation.
Justify why the sample size required by the trial design is compatible with the number of participants that can realistically be expected to be recruited to the trial. In any case, other innovative methods should be explored to facilitate the generation of data in the most efficient way.
Expend adequate effort on preparation, proportionate to its benefits.
Establish good communication between all parties involved, including investigators, patient organisations and experts in the disease as well as regulators, early during the planning of the trial.
Structured justification of preparation
The description of preparedness needs to be based on explicit data sources and explicit reasoning and can be modified iteratively. See Box 2 for a possible structure for a justification of preparation.
Box 2: Exemplar of a structured outline of a preparedness assessment
Statement of starting point: therapeutic need, clinical indication, development and availability of suitable age-appropriate dosage form(s), aim of the plan/trial including regulatory purpose and scope of information needs.
Availability of participants: patient flow diagram annotated with sources of information and estimates of variation, particularly at key decision points; sensitivity analysis of patient availability.
Sites: availability of suitable sites with relevant expertise in clinical research; extent of modifications needed to sites; estimates of participants at each site that can be validated; account for other competing trials; availability of human resources at site level.
Completeness of data: retention of participants, based on the acceptability of key trial assessments, including what the expected retention is anticipated to be; sensitivity analysis of data completeness.
Implications: trade-off between the need for information and the availability of participants; areas of concern and anticipated weak links in the preparation; uncertainties in the assumptions being made; actions required to optimise setup and conduct; actions required to maximise recruitment and retention.
As new information becomes available, the trial plan should be updated and the implications for the development plan reassessed. This could allow evidence- and data-driven discussions between sponsors and regulators and will contribute to the development of realistic expectations and reduce the risk of infeasible trials. Any assumptions on the number of children available for recruitment should be made explicit and justified; a minimal illustrative sketch of such an assumption-driven calculation follows.
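As an illustration of the kind of explicit, assumption-driven reasoning Boxes 1 and 2 call for, the following is a minimal sketch, not part of the Enpr-EMA recommendations, of a patient-availability funnel with a simple sensitivity analysis and a non-linear recruitment ramp-up. Every number in it is a hypothetical, labelled assumption of the sort that would need to be sourced and justified in a real preparedness assessment.

```python
# Minimal sketch of an assumption-driven patient-availability funnel and recruitment
# ramp-up. All values are hypothetical placeholders, not Enpr-EMA figures.

eligible_pool = 400          # assumption: prevalence estimate from a disease registry
fraction_reached = 0.50      # assumption: sites' catchment, from a site survey
fraction_eligible = 0.60     # assumption: protocol eligibility criteria applied to registry data
fraction_consenting = 0.40   # assumption: advocacy-group feedback on acceptability of the protocol
retention = 0.85             # assumption: expected retention given the visit burden

def expected_completers(pool, reached, eligible, consenting, retained):
    """Walk the funnel from epidemiology to the contents of the locked database."""
    return pool * reached * eligible * consenting * retained

base = expected_completers(eligible_pool, fraction_reached, fraction_eligible,
                           fraction_consenting, retention)
print(f"Base-case completers: {base:.0f}")

# Simple one-way sensitivity analysis: vary the least certain assumption.
for consenting in (0.25, 0.40, 0.55):
    n = expected_completers(eligible_pool, fraction_reached, fraction_eligible,
                            consenting, retention)
    print(f"Consent rate {consenting:.0%}: about {n:.0f} completers")

# Non-linear recruitment ramp-up: sites open gradually, so do not assume a constant rate.
sites_open_per_month = [1, 2, 4, 6, 8, 8, 8, 8, 8, 8, 8, 8]   # assumption: staggered openings
recruits_per_site_month = 0.5                                  # assumption: steady-state rate
cumulative, total = [], 0.0
for open_sites in sites_open_per_month:
    total += open_sites * recruits_per_site_month
    cumulative.append(total)
print("Cumulative recruits by month:", [round(x, 1) for x in cumulative])
```

If the base-case number of completers falls short of the sample size required by the design, that mismatch should be surfaced and justified before the trial opens, as recommended above.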
The construction of a flow diagram of patient availability is critical to define the real target population, from epidemiology to eligibility and from eligibility to the contents of the locked database. It is important to ensure all assumptions made during the construction of the flow diagram are explicit. Trials that may be used for regulatory purposes, that is, for a paediatric label or for authorising a paediatric indication, need to enrol a patient population that will demonstrate an effect of the medicine while determining an acceptable safety profile. For these reasons, clinical trials in rare diseases need to draw on a global patient population to overcome small patient pools in any given country.
Site contributions to preparedness
Sites and networks of sites should be involved as early as possible in those aspects of trial preparation to which they can contribute. This work is separate from clinical work, and the sponsor should derive high value from the work of the sites, taking into consideration potential conflicts of interest. Early consultation aimed at preventing unfeasible procedures or workflows that are incompatible with standard care is critical, as is ongoing dialogue to ensure that any changes to the study are accommodated by all relevant vendors. Sites play a key role in identifying local specificities, which directly impact the design of the trial (ie, standard of care and vaccination schedule). Roles and responsibilities need to be clearly defined, and the organisation of the sites has to meet industry and regulatory standards. It is extremely important that sites contribute to preparation for individual clinical trials by working on organisational aspects that are common to all clinical trials, such as monitoring organisation, facilities, personnel availability, and clear definition of roles and responsibilities. This generic work would facilitate site assessment of specific clinical trials.
Participant contributions to preparedness
The perspectives of potential participants are central to the preparation of trials. Children and young people have specific needs and views that have complex dynamics during acute and chronic illness. Early consultation with patients' and children's advocacy groups, ideally consultation with patient/parent/caregiver panels and community advisory boards, will improve communication with the target population and allow the identification of potential practical barriers to the conduct of the trial. Their input should be heeded as far as possible and should include, but not be limited to, relevant endpoints, timing of assessments, quality of life effects, tolerance of tests and assessments, and impact on their daily life and family dynamics. Protocols should be made flexible enough to reflect this input as much as possible. Of course, these decisions have to be placed in the frame of scientific and regulatory acceptability; therefore, it may also be beneficial to obtain regulatory input on the impact that these changes may have on the objectives and results of the research.
Support from patients' associations before, during and after the trial can also be helpful for trial participants and can increase participant retention, in compliance with local regulations. Feedback to children, young people and families who contribute to trial preparation is essential, and sponsors need to plan how and when to provide this. Advocacy groups can contribute to the preparation of plans and trials through: training of the people who supply their contributions; consideration of relevant endpoints, including, where possible, biomarkers and validated scales, time points of assessment and quality of life effects; communication with the patient community and awareness-raising on new drug development (including age-appropriate dosage forms when applicable); and review of, and contribution to the creation of, some trial-related documents (eg, consent/assent forms, information/awareness documents and lay summaries).
Implications for sponsors
Trial sponsors need to think ahead to include work on preparedness in all processes. CROs and other external vendors (such as data coordination centres, central laboratories, biobanks and drug suppliers) should be included in the assessment of preparedness. Sponsors should anticipate, allocate, deploy and expend relevant resources to meet the needs of good preparedness. Industry should design adult programmes and trials to inform paediatric programmes and trials as appropriate. Cultivating relevant contacts in advance regarding the capabilities of paediatric clinical research networks means that questions can be posed rapidly. Standing arrangements with sites (confidential disclosure agreements) will facilitate timely work on preparedness. Clinical research networks can support preparedness by providing consistent relationships with a range of sites, rapid dissemination of requests for information and the collation of responses. Feedback to sites is valuable for the sites themselves and helps build relationships. When sites identify risks and hurdles during trial preparation, sponsors should not underestimate them, as they may reappear at a later stage and then become a major constraint on the conduct of the trial. Many clinical trials are or may become part of a regulatory drug development plan, such as a paediatric investigation plan (PIP) in the EU or UK, or a paediatric study plan (PSP) in the USA. When planning these studies, it is beneficial for sponsors to obtain regulatory input early on and to keep an open dialogue on preparedness considerations. This can be done in the context of a PIP/PSP submission, and also through other regulatory interactions (eg, scientific advice).
Improving the context for trial preparedness
Other actions are needed beyond the preparation of individual trials. In order to improve the landscape for medicines research, the paediatric community (clinicians, patients and families, regulators, ethicists, sponsors and CROs) needs to:
Develop strategies to improve site selection.
Continue to undertake collaborative and constructive dialogue between patients' representatives, academics, industry and regulators to facilitate and accelerate treatment development for paediatric diseases, including rare diseases.
Tackle critical trial practicalities such as the location of sites and travelling costs for participants, and other ways of minimising the burden of research (such as virtual or home clinical trial visits and wearable technology).
Collect data that can be used to support and improve future trial preparation, including systematic collection of feedback from all involved (patients' representatives, researchers and academics, industry and regulators) to facilitate a culture of lessons learnt.
Lobby for greater recognition of the importance of research, and of readiness to participate in research, among healthcare professionals and across society. Public and professional awareness of clinical trials needs to be improved, especially for paediatric trials.
Disseminate good practice across paediatric clinical research networks. Since the patients are often minors, such communication efforts need to be sensitive to privacy, parental consent and data protection.
Consider efficient, patient-focused trial designs and identify how global regulatory requirements have implications for preparation.
Promote transparency about results and preparation.
Develop understanding of the natural history and pathophysiology of conditions that can inform the definition of pharmaceutical targets.
There is a continued need for improved, mutual understanding of paediatric trial requirements and challenges across the regulatory network, companies, researchers and ethics committees, as well as the public. This understanding is needed for efficient operations. More fundamentally, it is essential to respect the views and rights of participants in clinical trials. The intrinsic importance of this respect is supplemented by the need to retain trust among participants and the wider community. Working with children, young people and their parents is the best way to approach these issues. Issues that are not specific to individual products need a generic, precompetitive approach with contributions from multiple stakeholders. Research networks, including Enpr-EMA, are well placed to support these broader issues.
Lack of sufficient information during the preparation of trials often leads to their conduct being ineffective or even to their failure. The effect of this inefficiency on the development of new drugs for the paediatric population has an important impact on the management of children's health, since children often receive drugs that are not licensed or have never been tested in their age group or, even worse, have no available therapies for their disease. To help fill some of the gaps that contribute to the difficulty of implementing and conducting paediatric clinical trials, we have proposed an approach to collecting relevant information and a format for sharing that information. Everybody can contribute to preparedness in clinical trials; it is a shared responsibility among the different players. Some of the work can be done upfront: by the sites, establishing their processes and procedures; by the sponsors, opening communication channels with regulators, investigators and families; and by the patients, organising into advocacy groups that can represent their voice also at the regulatory level. This work should then proceed as a collaborative effort, exchanging knowledge and information and growing in experience.
Preparedness does not apply only to the initial phase of the trial but should extend to all subsequent phases, including the very last step of communicating the outcome of the study for regulatory purposes, for the scientific community and for the benefit of the participants. Explicit sharing of that information, and of the assumptions made when information is not available, will promote rigorous preparation and facilitate the conduct of feasible and appropriate trials.
Executive Summary of the Early-Onset Breast Cancer Evidence Review Conference
dced17a2-489d-4034-bed2-2c9079a5ee0b
7253192
Gynaecology[mh]
The American College of Obstetricians and Gynecologists convened an expert panel to identify the best evidence and practices from the literature, existing relevant society guidelines, and available validated specific or generalizable clinical tools. The panel was recruited from the Society for Academic Specialists in General Obstetrics and Gynecology to review and summarize the evidence. Panel members were required to have expertise in evidence review and synthesis. Subspecialty expertise in breast disease was also sought. Several of the panel members had completed subspecialty fellowship training in breast disease. The panel developed 10 separate research questions and used the PICO criteria (P=patient, problem, or population; I=intervention; C=comparison, control, or comparator; O=outcome[s]) to frame the literature review. These questions form the organizing basis for this executive summary. Experts in literature searches from the ACOG Resource Center searched the Cochrane Library, MEDLINE through Ovid, and PubMed for references not indexed through MEDLINE from January 2010 to January 2019. Literature was organized by level of evidence. Published guidelines were categorized separately from references. A primary reviewer was assigned to each topic to review titles and abstracts, then the entire manuscript when appropriate. Panel members expanded the search criteria when necessary, either increasing the timeframe or broadening the search to other populations, particularly when inadequate evidence was found on the 18–45 years age group. Reference lists from papers found in the search were also reviewed. Internet searches with standard search engines were performed to seek guidelines, recommendations, and tools that might not have been published in peer-reviewed publications. Relevant information was compiled into an evidence summary template. Completed templates were then reviewed by a secondary reviewer and the primary and secondary reviewer worked together on revisions in response to the secondary reviewer's comments. The American College of Obstetricians and Gynecologists convened the Early-Onset Breast Cancer Evidence Review Conference in Washington, DC, April 1–2, 2019, including the panel members and representatives from stakeholder professional and patient advocacy organizations (Table ). Panel members presented their reviews to the convened group, which discussed each section. Comments from the discussion were integrated into the review summary by the primary reviewer. The revised summaries were sent to a tertiary reviewer for final review, and final revisions were made by the primary reviewer. The final reviews (see Appendices 1–10) were used to develop the educational material. Breast cancer is the most common form of cancer in women and represents the second leading cause of cancer death in women. National Cancer Institute data from 2012 to 2016 indicated that 1.9% of new breast cancer cases and 0.9% of cancer deaths occurred among women aged 20–34 years, and 8.4% of new breast cancer cases and 4.7% of breast cancer deaths occurred among women aged 35–44 years. Black women had the highest death rate at 28.1 per 100,000 persons. Although 5-year relative survival rates were largely similar across age groups, women younger than age 45 years had among the lowest rates, second only to women aged 75 years and older. , See Table 1 in Appendix 1, available online at http://links.lww.com/AOG/B864 , for breast cancer incidence rates by age and race. 
See Table 2 in Appendix 1 ( http://links.lww.com/AOG/B864 ) for breast cancer mortality rates by age and race. Younger women tend to have more aggressive and biologically unfavorable tumor subtypes than older women, and poorer survival in early-stage disease (stages I and II) compared with women older than 40 years. In advanced stages, younger women have lower mortality, likely because of better overall general health. Although mortality trends have improved in all women, young black women continue to have higher mortality rates than other young women with breast cancer, irrespective of stage or hormone receptors. Annual hazard rates of death of young black women are improving more slowly than those of other races and ethnicities, suggesting less benefit from advances in treatment. Poorer prognosis in black women is thought to result from multiple factors, including more aggressive tumors, access barriers, and social determinants of health (see Appendix 1 [ http://links.lww.com/AOG/B864 ] for complete evidence summary). Autosomal dominant single-gene pathogenic variants account for approximately 5-10% of all cases of breast cancer. The BRCA1 and BRCA2 genes are the most common, representing more than 50% of all genes associated with early-onset breast cancer. Women who carry pathogenic variants have an increased lifetime risk of breast and other cancers and are at higher risk of developing early-onset breast cancer. BRCA pathogenic variants occur more frequently in certain populations (Table ), most notably in persons of Ashkenazi Jewish descent. The prevalence of BRCA1 and BRCA2 pathogenic variants is 1 in 40 (2.5%) in Ashkenazi Jews, compared with the general population prevalence of 1 in 400-600. In Ashkenazi Jews, three site-specific founder mutations have been identified (185delAG and 5382insC in BRCA1 and 6174delT in BRCA2), representing more than 90% of the BRCA mutations. In the United States, African American women have a lower incidence of breast cancer than Caucasian women, but higher breast cancer mortality rates. The higher mortality rate seems to be associated with two patterns: proportionally more African American women are diagnosed before 50 years of age (30-40% of all breast cancers in African American women) compared with Caucasian women (approximately 20% of all breast cancers in Caucasian women), and African American women have a twofold higher rate of breast cancers that lack expression of the estrogen receptor, progesterone receptor, and human epidermal growth factor receptor 2, known as triple-negative cancer. Triple-negative tumors are biologically more active, with higher recurrence and mortality rates compared with most other breast cancer phenotypes. These differences do not appear to be due to higher carriage rates of single-gene mutations such as BRCA1 and BRCA2 alone. Currently, population-based screening for BRCA genes in the absence of other risk factors is not broadly recommended, given their rarity and the uncertain benefit of large-scale testing. Because Ashkenazi Jews have a 10-fold increased risk of carrying a founder mutation in BRCA1 or BRCA2, consensus guidelines recommend offering routine testing for the three specific mutations. The National Comprehensive Cancer Network, ACOG, the U.S.
Preventive Services Task Force, the American Society of Breast Surgeons, and the American College of Medical Genetics provide recommendations for risk assessment, referral to genetic counseling or offering of genetic testing based on risk identification, and management of men and women identified with a genetic predisposition for early-onset breast cancer (see Table 2 in Appendix 2, available online at http://links.lww.com/AOG/B865 ). Common factors considered in risk assessment include the following: personal history of breast, ovarian, tubal, pancreatic, prostate, and other cancers, and either early age of onset of these cancers or other cancer-specific factors that increase the likelihood of carrying a pathogenic variant in a breast cancer gene (eg, triple-negative tumors); and family history of breast, ovarian, tubal, pancreatic, prostate, and other cancers suggesting an autosomal dominant pattern of inheritance. In addition to BRCA1 and BRCA2, other important but less common autosomal dominant genes are associated with early-onset breast cancer risk. Panel testing has emerged in the past few years to assess for possible gene alterations that have been implicated in early-onset breast cancer. The specific panels are usually defined by the laboratory offering the testing. A woman identified with a pathogenic variant placing her at increased risk for early-onset breast cancer can undergo increased surveillance to detect breast cancer at earlier stages, risk-reduction surgery, or chemoprophylaxis. Depending on the gene, surveillance may start at an earlier age and include mammography, magnetic resonance imaging (MRI), or both. The natural history of early-onset breast cancer is fairly well understood for some genes (eg, BRCA), but there is less complete understanding of the penetrance and age of onset for non-BRCA genes associated with breast cancer. Table provides an overview of common genes included in panel testing, along with recommendations for surveillance and risk reduction (see Appendix 2 [ http://links.lww.com/AOG/B865 ] for complete evidence summary). Assessment of family history is essential when evaluating young women accessing primary care. Understanding a woman's family history of breast cancer can identify individuals at elevated risk for hereditary breast cancer or women who would benefit from increased breast cancer surveillance. The American College of Obstetricians and Gynecologists, the Society of Gynecologic Oncologists, the U.S. Preventive Services Task Force, the National Institute for Health and Care Excellence, and the National Comprehensive Cancer Network have published guidelines recommending assessment of family history and screening for patients at increased risk of breast cancer. The American College of Obstetricians and Gynecologists states that screening should include, at minimum, a personal cancer history and first- and second-degree relatives' cancer history that includes a description of the type of primary cancer, the age of onset, and the lineage of the family member. The National Comprehensive Cancer Network clinical guidelines recommend genetic assessment for all patients with first- and second-degree relatives diagnosed with breast cancer younger than age 50 years. The U.S.
Preventive Services Task Force recommends screening of women who have family members with breast, ovarian, tubal, or peritoneal cancer using one of several screening tools designed to identify a family history that may be associated with an increased risk for potentially harmful mutations in breast cancer susceptibility genes ( BRCA1 or BRCA2 ). Women with positive screening results should receive genetic counseling and, if indicated after counseling, BRCA testing. Genetic counselors can help determine which of the many available panels of genetic testing are most appropriate and cost-effective. Women with deleterious genetic mutations tend to present with breast cancer at an earlier age. However, some studies suggest that women with a positive family history and no known genetic mutation are at increased risk of developing breast cancer, and these cancers occur at an earlier age than those in the general population without a known mutation. – The Nurses' Health Study and a systematic review and meta-analysis by Pharoah et al identified consistent findings. , In the Nurses' Health Study, women with a family member diagnosed with breast cancer before age 50 years had an increased risk for breast cancer compared with women of the same age who had family members diagnosed at older ages. Compared with women with no family history, those whose mother was diagnosed before age 50 years had an adjusted relative risk (RR) of 1.69 (95% CI 1.39–2.05), and those whose mother was diagnosed at age 50 years or older had an RR of 1.37 (95% CI 1.22–1.53). Pharoah et al found that a history of breast cancer in at least one first-degree relative resulted in RR estimates ranging from 1.2 to 8.8, with most studies showing RRs between 2 and 3. The pooled risk estimate for having two affected first-degree relatives was 3.6 (95% CI 2.5–5.0). Genetic mutations were not factored out in many of the older studies. There are limited data on outcomes for women with an elevated risk of breast cancer by family history without an established familial genetic mutation. National guidelines consistently emphasize the importance of gathering a thorough family history of breast cancer. However, these guidelines are based on limited data estimating lifetime and age-based breast cancer risk for women in families that do not have identified genetic mutation carriers. Many of the current guidelines are based on expert opinion and studies of family history that were published before the availability of genetic testing for mutations such as BRCA1 and BRCA2 . There is general consensus that women with a lifetime risk of breast cancer greater than 20%, as determined by any model, are at high risk. Multiple validated models can be used to determine the probability of carrying a genetic mutation that increases the risk of breast cancer. There is no consensus and there are no data to support the recommendation of one model over another. Currently, the National Comprehensive Cancer Network recommends that women with an estimated lifetime risk of breast cancer of 20% or higher, determined by models largely based on family history (eg, Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm, Claus, BRCAPRO, or Tyrer-Cuzick), should be offered annual mammography screening starting at age 30 years and annual breast screening by MRI starting at age 25 years , (see Appendix 3, available online at http://links.lww.com/AOG/B866 , for complete evidence summary).
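The 20% lifetime-risk threshold described above functions as a simple decision rule once a model estimate is in hand. The sketch below is illustrative only: the function name, the input format (a lifetime-risk estimate taken from any of the models named above), and the returned structure are assumptions for demonstration, not guideline text or a published tool.

```python
def enhanced_screening_recommended(lifetime_risk: float) -> dict:
    """Illustrative encoding of the threshold rule summarized above.

    lifetime_risk: estimated lifetime breast cancer risk (0.0-1.0) from a
    family-history-based model such as Tyrer-Cuzick, BOADICEA, Claus, or
    BRCAPRO (the choice of model and output format are assumptions here).
    """
    if lifetime_risk >= 0.20:
        # Per the recommendation summarized above: annual mammography from
        # age 30 years and annual breast MRI from age 25 years.
        return {
            "high_risk": True,
            "annual_mammography_from_age": 30,
            "annual_breast_mri_from_age": 25,
        }
    # Below the threshold, average-risk screening guidance applies instead.
    return {"high_risk": False}


print(enhanced_screening_recommended(0.27))  # model-estimated 27% lifetime risk
print(enhanced_screening_recommended(0.11))  # model-estimated 11% lifetime risk
```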
This is in contrast to screening recommendations for average-risk women, which all recommend screening with mammography alone, starting at age 40–50 years, depending on the source. There are at least moderate-quality data that risk assessment, referral for genetic counseling, and genetic testing provide net benefit in women at high risk for early-onset breast cancer. These steps can form the basis for intensive surveillance for early detection or use of risk-reduction methods that have proven effective in detecting breast cancer at an earlier stage and decreasing mortality rates. The National Institutes of Health maintains a periodically updated list of online resources designed to educate and assist health care providers on various topics ranging from basic genetics and risk assessment to criteria for referral to genetic counseling and interpretation of genetic test results. Other national societies have created genetics “toolkits” or published guidance to educate health care providers on basic cancer genetics, risk assessment, and referral recommendations , (see Table 1 in Appendix 4, available online at http://links.lww.com/AOG/B867 , for a list of useful websites). Providers can also learn about these topics through other mechanisms, such as continuing medical education and online learning. The depth and detail of the material covered range from superficial (eg, short “expert” videos) to online courses that take place over several months. Very few online courses provide a validated assessment of competency or certification. The content of specific training and assessment of competency for physicians who counsel patients about genetic testing have not been standardized. The U.S. Preventive Services Task Force concluded that health care providers should assess risk based on personal or family history and refer women who screen positive to cancer genetic counselors. A number of validated tools exist to determine who should be referred for genetic testing, – and several professional specialty societies have developed lists of indications for referral and testing. , , These tools are specifically designed to evaluate who should be referred for BRCA testing; however, because BRCA carriers represent the greatest proportion of women at genetic risk for early-onset breast cancer, these tools are reasonable proxies for genetic screening for early-onset breast cancer. These tools have been validated in some populations (non-Hispanic white women), and it is not known how the tools perform in nonwhite populations. It remains unclear how frequently these tools are used in practice by physicians. Evaluation suggests that the tools miss a substantial proportion of carriers. , Interpretation of genetic test results can be complex and usually requires a qualified individual who has specific training in cancer genetics. , , A number of tools and calculators are used to estimate lifetime invasive breast cancer risk, but not necessarily the predicted age of onset (see Table 2 in Appendix 4, http://links.lww.com/AOG/B867 , for a comparison of four commonly used risk-assessment models: Tyrer-Cuzick, the Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm, Claus, and the modified Gail model, also called the Breast Cancer Risk Assessment Tool).
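The minimum family-history elements and referral triggers summarized above lend themselves to a simple structured record. The sketch below is an assumption for illustration: the field names and the two triggers encoded (a first- or second-degree relative diagnosed with breast cancer before age 50 years, or any relative with ovarian, tubal, or peritoneal cancer) are drawn from the guideline summaries above, but they are only a partial rendering of the published criteria, and the data structure itself is not a validated tool.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class RelativeCancerHistory:
    # Minimum elements noted above: cancer type, age of onset, degree, and lineage.
    relation_degree: int                  # 1 = first-degree, 2 = second-degree
    lineage: str                          # "maternal" or "paternal"
    cancer_type: str                      # eg, "breast", "ovarian", "tubal", "peritoneal"
    age_at_diagnosis: Optional[int] = None


def refer_for_genetic_counseling(history: List[RelativeCancerHistory]) -> bool:
    """Illustrative triage: True if either encoded trigger is met."""
    for rel in history:
        if rel.relation_degree in (1, 2):
            if (rel.cancer_type == "breast"
                    and rel.age_at_diagnosis is not None
                    and rel.age_at_diagnosis < 50):
                return True
            if rel.cancer_type in ("ovarian", "tubal", "peritoneal"):
                return True
    return False


family = [
    RelativeCancerHistory(2, "maternal", "breast", 46),
    RelativeCancerHistory(1, "paternal", "prostate", 68),
]
print(refer_for_genetic_counseling(family))  # True: second-degree relative with breast cancer at 46
```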
Numerous national consensus guidelines and recommendations have been developed to assist health care providers in communicating with patients about referral to genetic counseling or testing for early-onset breast cancer genes or both. , , Some specialty societies have produced separate guidance specifically addressing both the interpretation of genetic test results and how to communicate these results to patients. Some guidelines are frequently updated, whereas others are periodically revised (ie, every few years), , , resulting in guidance that may differ, causing confusion among health care providers and patients. All current guidelines recommend that women should be screened for personal and family history of breast and other related cancers and referred for genetic counseling or testing or both as appropriate. In addition, all guidelines recommend that determination for testing and pretest and posttest counseling should be performed by individuals with appropriate training. However, there is a shortage of genetic counselors in the United States, which has been identified as a barrier to effective counseling , (see Appendix 4 [ http://links.lww.com/AOG/B867 ] for complete evidence summary). Breast tissue is composed of fibroglandular tissue and fat. The fibroglandular tissue is a mixture of fibrous stroma and ductal epithelium and appears denser or brighter on mammography because the X-rays are not able to penetrate at the same rate as fatty tissue. The Breast Imaging-Reporting and Data System for mammography developed by the American College of Radiology includes a subjective assessment of how much fibroglandular tissue is present (see Table 1 in Appendix 5, available online at http://links.lww.com/AOG/B868 ). As women age, breast tissue typically becomes less dense. Most of the data about breast density and cancer risk come from women older than age 50 years. Dense breasts are present in the majority of younger women. A systematic review of risk for breast cancer in women aged 40–49 years reported that extremely dense breasts were associated with an increased risk of breast cancer when compared with breasts with scattered fibroglandular densities (RR 2.04, 95% CI 1.84–2.26). In a more recent case-control study of 213 Korean women with breast cancer, women with the highest breast density, described as 50% density or higher, had an adjusted odds ratio of 2.98 (95% CI 0.99–9.03) for breast cancer after adjustment for multiple variables. The wide CI in this nonsignificant finding is likely related to the small number of included women, and future studies should be monitored. The median age in the study was 51.5 years, with 45% of cancers diagnosed before age 50 years. Older studies are harder to interpret because they used many different ways of characterizing breast density, but in general, when comparing the most dense with the least dense group, there appears to be an increased risk of breast cancer, with RRs as high as 4.64 (95% CI 3.64–5.91) reported. As the majority of premenopausal women have dense breasts, it is not clear that RRs estimated from comparisons of extremes of breast density categories are appropriate measures of risk in this age group. Dense breasts decrease the sensitivity of mammography because dense breast tissue appears radiopaque, similar to breast cancers, decreasing visual contrast (“masking”). In women with extremely dense breasts, mammography has 62% sensitivity for detection of breast cancer, compared with 88% sensitivity for women with fatty breasts.
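For orientation, the sensitivity figures above translate directly into missed cancers. The back-of-the-envelope calculation below uses an arbitrary cohort of 100 cancers; only the 62% and 88% sensitivities come from the text.

```python
# Of every 100 cancers present at screening, sensitivity determines how many
# are detected by mammography; the remainder are missed (potential interval cancers).
cancers = 100
for label, sensitivity in [("extremely dense breasts", 0.62), ("fatty breasts", 0.88)]:
    detected = cancers * sensitivity
    missed = cancers - detected
    print(f"{label}: ~{detected:.0f} detected, ~{missed:.0f} missed per {cancers} cancers")
# extremely dense breasts: ~62 detected, ~38 missed per 100 cancers
# fatty breasts: ~88 detected, ~12 missed per 100 cancers
```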
One way to assess delay in diagnosis is to determine the rate of interval cancers, those cancers found between recommended screening intervals after a normal mammogram. No studies evaluating masking due to breast density have exclusively evaluated women with early-onset breast cancer. Most studies included large proportions of women older than age 50 years, although women aged 40–49 years were also represented. More recent evidence suggests that dense breasts appear to be associated with at least a twofold increased risk of interval cancers as well as a worse prognosis, including larger tumor size and more node-positive disease. – Studies of adjunctive screening of women with dense breasts with ultrasonography and MRI generally noted higher cancer detection rates and earlier diagnoses, but also showed an increase in biopsies for benign lesions and increased health care costs, and no study showed improvement in mortality (see Appendix 5 [ http://links.lww.com/AOG/B868 ] for complete evidence summary). The majority of women under age 46 years have dense breasts, so any recommendations for additional screening in this age group would require additional testing in a large number of women whose baseline risk is low. Most organizations, including ACOG and the U.S. Preventive Services Task Force, do not recommend additional screening in women younger than age 46 years with a normal mammogram and dense breasts. The Society of Breast Imaging expresses concern for a delay in diagnosis and later stage at diagnosis of noncalcified breast cancers because of dense breast tissue and suggests that ultrasonography may be of benefit, provided the woman is willing to accept an increased risk of false-positive results. The National Comprehensive Cancer Network recommends that women with mammographically dense breast tissue (heterogeneously or extremely dense tissue) be counseled about the risks and benefits of supplemental screening. Neither of these organizations specifically addresses dense breasts in younger women. Mandatory breast density reporting has been enacted as legislation in an increasing number of states. Many patients receive letters notifying them of their breast density, and interpretation of these letters can be challenging for patients and health care providers. In early 2019, Congress authorized the U.S. Food and Drug Administration to amend the Mammography Quality Standards Act of 1992 to include mandatory breast density reporting at the federal level. The public comment period for the proposed changes to the legislation ended in June 2019, and final regulations should be forthcoming. The American College of Obstetricians and Gynecologists recommends that health care providers comply with state laws that require disclosure of breast density in mammogram reports. Younger women with dense breasts and no other risk factors can be counseled that dense breasts are very common in this age group, and supplemental screening methods are available. However, supplemental methods are not specifically recommended, carry a significant risk of false-positive results, and have not been shown to change outcomes. When mammographic density in combination with other risk factors places the woman at above-average risk, additional screening with ultrasonography may be warranted, and a shared decision-making model can be applied. Some breast cancer risk calculators integrate breast density and can be used to assess overall risk in these women (see Appendix 5 [ http://links.lww.com/AOG/B868 ] for complete evidence summary).
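Because notification letters report density in Breast Imaging-Reporting and Data System terms, a small mapping can make the counseling trigger explicit. The sketch below is an assumption for illustration only: it labels the two densest reporting categories (commonly lettered c and d, ie, heterogeneously dense and extremely dense) as "dense" and flags them for the supplemental-screening discussion recommended by the National Comprehensive Cancer Network, as summarized above; the letter labels and function name are not taken from any guideline text.

```python
# Assumed BI-RADS density labels: a = almost entirely fatty, b = scattered
# fibroglandular densities, c = heterogeneously dense, d = extremely dense.
DENSE_CATEGORIES = {"c", "d"}


def counsel_on_supplemental_screening(birads_density: str) -> bool:
    """Return True when the report falls in a 'dense' category (c or d),
    ie, when counseling about the risks and benefits of supplemental
    screening would apply under the recommendation summarized above."""
    return birads_density.lower() in DENSE_CATEGORIES


for category in ["a", "b", "c", "d"]:
    print(category, counsel_on_supplemental_screening(category))
```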
History of Proliferative Breast Disease Many proliferative breast diseases increase the risk of breast cancer, but the effect on early-onset breast cancer risk is unknown. Atypical ductal hyperplasia carries a more than 20% risk of ductal carcinoma in situ (DCIS) or invasive malignancy at the time of diagnosis, so it is typically excised. Both atypical ductal hyperplasia and atypical lobular hyperplasia are associated with a fourfold increased lifetime risk of breast cancer. – When atypical lobular hyperplasia is an incidental finding and there is concordance between radiologic and pathologic findings regarding the targeted biopsied lesion, it is less likely to be associated with a concurrent malignancy, so close monitoring is usually appropriate. Lobular carcinoma in situ is not considered a preinvasive malignancy like DCIS, but does significantly increase the lifetime risk of breast cancer (RR 6.9–11, absolute risk 7.1% over 10 years). , Pleomorphic lobular carcinoma in situ may increase that risk even further. Radial scars are characterized microscopically by a fibroelastic core with radiating ducts and lobules. Radial scars and complex sclerosing lesions carry an 8–15% risk of DCIS or invasive malignancy at the time of excision. , – Radial scars are usually managed by excisional biopsy. There are limited data with which to determine the optimal screening strategy after atypical ductal hyperplasia, atypical lobular hyperplasia, or lobular carcinoma in situ. Breast MRI may improve breast cancer detection over mammography alone, but it is associated with more biopsies in this population. The National Comprehensive Cancer Network is the only professional society with screening recommendations for those who have had atypical ductal hyperplasia, atypical lobular hyperplasia, or lobular carcinoma in situ : Annual mammography (not before age 30 years). Consider tomosynthesis. Consider annual breast MRI (not before age 25 years). Clinical breast examinations every 6–12 months. Engage in breast self-awareness (women should be familiar with their breasts and report changes to their health care provider promptly). Past or Present Use of Hormonal Contraception There have been conflicting data regarding the effect of hormonal contraception on breast cancer risk. A large meta-analysis in 1996 revealed a small increased risk of breast cancer among women with current or recent oral contraceptive use (RR 1.07, SD 0.02, P <.001). Similar findings were noted in a large cohort study in 2017 (RR 1.20, 95% CI 1.14–1.26). The absolute risk was quite small (one additional breast cancer diagnosis for every 7,690 women using hormonal contraception each year). In both studies, breast cancer risk returned to baseline 5–10 years after discontinuing hormonal contraception. , Most studies do not suggest an increased risk of breast cancer among women using a levonorgestrel intrauterine system (IUS) or depo-medroxyprogesterone injections. – There are limited data regarding the etonogestrel implant, but no study to date has demonstrated an increased breast cancer risk. The risks of hormonal contraception must be weighed against the health, social, and economic consequences of unplanned pregnancy, as well as the many noncontraceptive benefits of hormonal contraception. 
The maternal mortality rate in the United States in 2015 was 26.4 deaths per 100,000 pregnancies, which is comparable with the rate of excess breast cancer diagnoses (13 [95% CI 10–16]/100,000 person years) related to hormonal contraception suggested by the 2017 cohort study. , Hormonal contraception, particularly oral contraceptives, significantly decreases the risk of ovarian and endometrial cancers. , There are no screening guidelines that specifically address exposure to hormonal contraception, so routine breast cancer screening is recommended in the absence of other risk factors for early-onset breast cancer. Past or Present Use of Fertility Treatments Many fertility treatments cause an increase in circulating estrogen and progesterone levels, which theoretically could increase future breast cancer risk. Most studies have demonstrated no change or a decreased risk of breast cancer after fertility treatments. Few studies specifically evaluated the risk of early-onset breast cancer. Very limited data suggest an increased risk of breast cancer among specific populations, including women exposed to many high-dose cycles of clomiphene citrate and women undergoing in vitro fertilization before age 24 years. , The American Society for Reproductive Medicine states that there is “fair evidence that fertility drugs are not associated with an increased risk of breast cancer (Grade B).” No screening guidelines specifically address fertility treatment exposure, so routine breast cancer screening is recommended in the absence of other risk factors for early-onset breast cancer. History of Radiation Exposure Chest radiation therapy before age 30 years is a well-established risk factor for early-onset breast cancer. , Treatments of concern include mantle radiation for Hodgkin's lymphoma and moderate-dose chest radiation therapy for non-Hodgkin's lymphoma, leukemia, bone malignancies, or pediatric solid tumors (eg, Wilms tumor, neuroblastoma, and soft-tissue sarcoma). The cumulative incidence of invasive breast cancer in these patients is 13–20% by age 40–45 years, similar to that seen among BRCA1 or BRCA2 mutation carriers. – Risk is greatest among women treated with 40 Gy or more, but all women treated with 20 Gy or more are at increased risk for early-onset breast cancer. , , This increased risk is evident 8–10 years after completion of radiation therapy and does not plateau at any point after treatment. – , Early initiation of breast cancer screening is effective for reducing stage at diagnosis in this population. Both mammography and breast MRI are effective screening studies after chest radiation therapy, but mammography has higher specificity. , – Multiple professional organizations have published screening guidelines for women with a history of chest radiation therapy (Table ). There are limited data to suggest superiority of one screening protocol over others. Shared decision making, including the discussion of risks of false positives and negatives, is recommended when deciding on a screening strategy. Prior Breast or Ovarian Cancer Breast cancer survivors remain at risk for a second breast cancer, but the risk for a second early-onset breast cancer among young breast cancer survivors is unknown. Among survivors of any age without a known cancer gene mutation, the risk of a second breast cancer is approximately 3% and 7% at 10 and 15 years after diagnosis, respectively. 
There are no data regarding risk of early-onset breast cancer in women with ovarian cancer in childhood, adolescence, or early adulthood. After breast cancer treatment, survivors require clinical and imaging follow-up to assess for recurrence and second malignancies. Both the National Comprehensive Cancer Network and the European Society of Breast Cancer Specialists recommend annual mammograms starting 6–12 months after completion of treatment. , Breast MRI should be considered in patients at high risk for a second cancer (eg, BRCA1 or BRCA2 mutation carriers) , (see Appendix 6 [ http://links.lww.com/AOG/B869 ] for complete evidence summary).
Objective measures of health disparities are well established, and health disparity populations exhibit differences in rates of mammography screening, age at breast cancer diagnosis, stage at time of diagnosis, and rates of cancer treatment. African American women are significantly more likely to experience higher mortality from breast cancer compared with white women (Fig. ). Other health disparity groups, such as American Indians and Alaska Natives, Asians, Hispanics, and Native Hawaiians and other Pacific Islanders, are affected but often inadequately studied, as are sexual and gender minority persons. The increased incidence of more-aggressive tumor types only partly explains the survival gap for black women. , Social determinants of health, such as systemic racism, poverty, and the environment, greatly affect cancer screening rates and outcomes. Health literacy, childcare concerns, financial difficulties, and transportation affect the likelihood of receiving preventive health services such as mammography. , Geography is a particularly important factor. Rural women are more likely to live in poor counties, with greater barriers to accessing primary care. Poverty or lack of a regular primary care provider who recommends mammography is highly predictive of not being screened. , In general, poverty status correlates with more advanced stage at diagnosis, receiving less aggressive treatment, and higher risk of all-cause mortality. Physical proximity to urban centers is not a panacea. In 2014, African American women with breast cancer in Georgia living in isolated rural areas were 45% more likely to die than white women, whereas African American women living in urban areas were 24% more likely to die than white women. Provider-level bias and discrimination in breast cancer care treatment exist. For example, when genetic testing is indicated, African American women are less likely to be referred for genetic testing for pathogenic variants than white women. , African American women are also less likely to receive any type of lymph node surgery for axillary staging overall. Women of lower socioeconomic status are adversely affected by lack of health insurance coverage. Cost affects primary care utilization and is a factor in patient decision making regarding mammography. By one estimate, up to 37% of the mortality difference in breast cancer among black compared with white women can be attributed to disparities in health insurance. Intensive focus on modifiable system factors would be beneficial, such as expanding insurance coverage, addressing transportation barriers to appointments, and increasing access to primary care.
The use of patient navigators and advocates, translator services, and tracking systems across different health systems could reduce the effect of limited health literacy, mistrust, and negative prior experiences with health care. General practitioners who provide counseling and recommendations on health care preventive services can improve the rates of mammography for underscreened groups, such as recent immigrants. Bias by health care providers and health systems leading to disparate rates of services offered to patients should be corrected, and to further decrease differences in mortality, emphasis should be placed on ensuring equal treatment after diagnosis. Groups such as the Black Women's Health Imperative are at the forefront of working to reduce these disparities, and can serve as a resource for both patients and health care providers. Efforts to promote quality improvement and adherence to national guidelines are important. Breast cancer incidence is higher in younger African American women and other ethnic groups. In contrast, among postmenopausal women, breast cancer incidence is highest in white women. The proportion of breast cancer diagnoses by age for nonwhite patients with breast cancer peaks in the late 40s, whereas diagnosis for white patients peaks in their 60s; this phenomenon is known as the crossover effect (Fig. ). Most breast cancer research has been conducted on white women. Major professional society screening guidelines developed using this body of evidence might not be adequate for nonwhite populations. No national guidelines address this concern, but in 2018, the American College of Radiology commented that women at high risk, particularly black women and those of Ashkenazi Jewish descent, should be evaluated early in life to discuss potential benefit from supplemental screening. Consideration should be given to encouraging screening before age 50 years, especially for African American women (see Appendix 7, available online at http://links.lww.com/AOG/B870 , for complete evidence summary). Although there are no validated tools or best practices specific to identifying risk factors or estimating the risk of early-onset breast cancer, there are multiple tools that may be helpful to identify short-term risk in younger women. Current best practices aim to identify women at risk of familial cancer syndromes on the basis of family history to determine who may benefit from genetic testing. The three most widely used tools for predicting BRCA gene carrier probability are BRCAPRO, BOADICEA (the Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm), and Penn II. BRCAPRO and BOADICEA also provide cancer risk estimates in addition to estimates of likelihood of genetic mutations. These models might be useful to direct women to genetic testing and counseling who are at increased risk of genetic mutations that pose a high risk of early-onset disease. BRCAPRO is a validated statistical program to estimate individual carrier probabilities on the basis of family history. It is not specific to any age range and does not directly estimate the risk of early-onset cancer, but rather the risk of carrying a BRCA1 or BRCA2 mutation. BOADICEA likewise was developed using population data from families in the United Kingdom to create a model based on family history and requires detailed family pedigree. The Penn II model uses clinical questions based on family history to reach a carrier probability, but does not calculate cancer risks. 
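Tools of this kind estimate the probability of carrying a mutation by updating a prior carrier frequency with how strongly the observed family history points toward a carrier; none of the published models (BRCAPRO, BOADICEA, Penn II) is reproduced here. The toy calculation below only illustrates that updating step, and every number in it (the prior and the likelihood ratios) is an invented placeholder, not a value from any of those models.

```python
def posterior_carrier_probability(prior: float,
                                  lr_history_if_carrier: float,
                                  lr_history_if_noncarrier: float = 1.0) -> float:
    """Toy Bayes update: P(carrier | family history).

    prior: assumed pre-test carrier probability (placeholder value).
    lr_history_if_carrier / lr_history_if_noncarrier: how likely the observed
    family history would be under each hypothesis (placeholder values).
    """
    numerator = prior * lr_history_if_carrier
    denominator = numerator + (1.0 - prior) * lr_history_if_noncarrier
    return numerator / denominator


# Entirely hypothetical numbers, for illustration only: a 1-in-400 baseline
# prior and a family history judged 20 times more likely if the patient is a
# carrier than if she is not.
print(round(posterior_carrier_probability(1 / 400, 20.0), 3))  # ~0.048
```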
Once a BRCA1 or BRCA2 mutation is identified, the Stanford risk-assessment tool for BRCA carriers may aid in decision making about preventive measures because it provides age-related risk of cancer and compares multiple intervention strategies. Additional widely validated models to assess cancer risk include the Tyrer-Cuzick, modified Gail, and Breast Cancer Surveillance Consortium models. None specifically assess risk of early-onset or premenopausal breast cancer, although most provide estimated 5- or 10-year cancer risk as well as lifetime risk of breast cancer. No models used validation cohorts with patients younger than 20 years. The modified Gail model has been validated in women 35 years and older to assess 5-year invasive cancer risk. The Tyrer-Cuzick model has been studied in women older than age 20 years to assess 10-year cancer risk and has been shown to perform better in women with a family history of breast cancer. The Breast Cancer Surveillance Consortium risk calculator is validated for women older than age 35 years to provide 5- and 10-year risks and includes family history factors as well as breast density in the calculation. There are limited data on the use of these models to specifically address cancer risk reduction in young women. Family history should be collected and updated periodically to identify patients who may be at increased risk of predisposing genetic mutations. Tools that may aid in collecting family history are the Ontario Family History Assessment Tool, Manchester Scoring System, Referral Screening Tool, Pedigree Assessment Tool, and FHS-7. , There is no evidence to recommend one method over another. Those who screen positive or who meet published guidelines for qualifying family histories should be referred for genetic counseling and testing. There are no guidelines or best practices for identifying risk factors or for the use of tools to estimate risk specific to early-onset breast cancer. However, multiple organizations provide guidance for assessing risk of breast cancer in general. The U.S. Preventive Services Task Force advocates use of brief familial assessment tools to assess women with a personal or family history of breast, ovarian, tubal, or peritoneal cancer or who have an ancestry associated with BRCA1 or BRCA2 gene mutations. The U.S. Preventive Services Task Force reviewed six tools that were adequately validated, but found insufficient evidence to recommend one tool over another. Other organizations likewise do not advocate for use of any specific tool. , – National Comprehensive Cancer Network guidelines on breast cancer risk reduction recommend assessing family history and referring for genetic counseling when appropriate, as well as use of the modified Gail or Tyrer-Cuzick model to assess risk among women older than age 34 years. The National Comprehensive Cancer Network has also established criteria for genetic testing for high-risk mutations. These guidelines recommend assessment no earlier than age 18 years based on family history. No specific tool is recommended, and the recommendations are not specific to reducing the risk of early-onset cancer (see Appendix 8, available online at http://links.lww.com/AOG/B871 , for complete evidence summary). Shared decision making is a key component of patient-centered health care, particularly because there is often more than one option for screening. Although patient decision aids and risk calculators help enumerate risk and are adjuncts to shared decision making, the process is more involved.
Using narrative risk communication strategies, communicating absolute rather than RR, and managing framing bias are important considerations in communicating risk of early-onset breast cancer. Many decision aids and calculators are directed to specific populations (eg, subtypes or age ranges), but none are specific for communicating risk of early-onset breast cancer. Several tools may be useful: Families Sharing Health Assessment and Risk Evaluation (Families SHARE, a product of the National Institutes of Health's National Human Genome Research Institute) is a decision aid that is useful for shared decision making for individuals of varied age groups and can be used within and outside of an office setting. Breast Screening Decisions (developed collaboratively by the Weill Cornell Medical College and Sloan Kettering Cancer Center) is directed to women aged 40–49 years. Breast Cancer Screening (PDQ) has both a patient and health care provider tool, which can be used as companion documents. The University of Wisconsin School of Public Health's Health Decision tool was originally created and tested at the University of California, San Francisco. – It includes a breast cancer screening module that can be integrated into some electronic health record systems. Studies of decision aids for breast cancer prevention in BRCA1 and BRCA2 mutation carriers demonstrated that cancer-related distress was reduced among those who used a decision aid compared with those who did not. Decisional conflict did not change with use of the aid. , The following tools may be useful for women at high risk of hereditary breast and ovarian cancer: The Cancer Risk Education Intervention Tool is a web-based (noninteractive) adjunctive tool for use in low socioeconomic settings and among ethnically diverse women. The Stanford Shared Decision Making Tool for women with BRCA1 or BRCA2 was developed to guide decision making about screening and treatment based on calculated risk. For minority groups, the Health Belief Model was used as a construct for developing a school-based classroom and online tool that increased knowledge about breast cancer risk among African American women aged 20–39 years. Because we anticipated that a literature search would find limited information specific to communicating risk of early-onset breast cancer, we deliberately conducted a broad search encompassing other aspects of breast cancer and other cancers and health conditions. Patient decision aids for colorectal cancer screening have been shown to improve knowledge and interest in screening compared with no information, but are no better than general colorectal cancer screening information. Healthwise Knowledge Base is an evidence-based interactive platform to inform patients about mammogram initiation that includes a shared decision making breast cancer screening tool for women aged 40–50 years (see Appendix 9, available online at http://links.lww.com/AOG/B872 ), as well as a tool for assisting in decisions about BRCA testing. The user's concerns, desires, and fears are weighed in response to evidence provided about the risks and benefits of screening, and a score indicating preferences and readiness for screening is calculated. A decision analytic model was used to improve estimation of benefits and risks for patients undergoing thrombolysis, with the added benefit that this computerized decision aid can be embedded in an electronic health record. 
This approach could be translated to support integration of the Gail or Families SHARE model, for example, into a primary care or a woman's personal electronic health record. There are no current major professional society or health services guidelines about communicating the risk for early-onset breast cancer. Shared decision making has been endorsed by ACOG for deciding the age at which to initiate breast cancer screening. The American College of Obstetricians and Gynecologists acknowledges the importance of screening for social determinants of health in all patients, as these factors may influence decision making and communication. U.S. Preventive Services Task Force guidelines do not address early-onset breast cancer risk, except to state that the recommended screening guidelines do not apply to women with prior chest radiation or known underlying genetic mutations such as BRCA1 or BRCA2 . National Institute for Health and Care Excellence guidelines recommend providing information and support for decision making, but do not recommend any specific tool or decision aid. National Institute for Health and Care Excellence guidelines regarding familial breast cancer also recommend the use of shared decision making, materials, and decision aids as well as standardizing the discussion involved in counseling patients and families at risk for familial breast cancers (see Appendix 9 [ http://links.lww.com/AOG/B872 ] for complete evidence summary). There is limited evidence for risk modification specific to the outcome of early-onset breast cancer. The evidence for risk reduction among younger women is most robust for BRCA mutation carriers. Risk-reducing bilateral mastectomy should be considered in women with a genetic mutation conferring a high risk of breast cancer. There are no guidelines or studies addressing the age at which risk-reducing mastectomies should be undertaken. Age-related risk estimation tables may be useful to counsel women with BRCA mutations on the timing of prophylactic procedures. There is no evidence supporting risk-reducing mastectomies for women with low-risk genes or whose risk is based on nonhereditary factors alone. We found no evidence to support oophorectomy for the purpose of preventing early-onset breast cancer. The reduction in lifetime risk of breast and ovarian cancer with bilateral salpingo-oophorectomy has been estimated to be as high as 50% for BRCA1 and BRCA2 carriers, although more recent publications question these results. There are no guidelines or studies about the use of risk-reducing agents expressly for the purpose of reducing the risk of early-onset breast cancer. Tamoxifen is the only agent indicated for use in premenopausal women at increased risk of breast cancer, and is recommended for women with a 5-year risk of 1.7% or higher. The risks and benefits in women younger than 35 years are not known. Most large trials of chemoprevention were performed in older women who had completed menopause. The National Surgical Adjuvant Breast and Bowel Project P-1 trial found a 44% decrease in cancer among women younger than 50 years treated with tamoxifen for chemoprevention. There are limited data regarding the magnitude of risk reduction with the use of tamoxifen for BRCA1 and BRCA2 mutation carriers or women with prior thoracic radiation. However, cohort data suggest there might be a benefit for BRCA2 carriers; the National Surgical Adjuvant Breast and Bowel Project P-1 study showed a nonsignificant 62% decrease relative to placebo (RR 0.38, 95% CI 0.06–1.56).
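Several of the figures in this discussion are reported both as a ratio and as a percent reduction; the conversion is simply one minus the ratio. The short snippet below applies it to the relative risk just quoted and to the hazard ratio cited in the following paragraph.

```python
def percent_risk_reduction(ratio: float) -> float:
    """Relative risk (or hazard ratio) -> percent reduction versus comparator."""
    return (1.0 - ratio) * 100.0


for ratio in (0.38, 0.66):
    print(f"ratio {ratio:.2f} -> about {percent_risk_reduction(ratio):.0f}% reduction")
# ratio 0.38 -> about 62% reduction (RR in BRCA2 carriers, as cited above)
# ratio 0.66 -> about 34% reduction (hazard ratio in women younger than 50 years)
```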
Although other European studies have shown mixed effects, this overall reduction is supported by a systematic review of randomized controlled prevention trials across all studied populations showing a 34% decrease in the risk of breast cancer for women younger than 50 years (hazard ratio 0.66, 95% CI 0.52–0.85). There is limited evidence for the modification of health behaviors to reduce the risk of early-onset breast cancer. A recent meta-analysis assessed numerous risk factors for BRCA carriers. Later age at the time of first live birth was associated with a decreased lifetime risk of breast cancer for BRCA1 carriers (effect size for women aged 30 years or older=0.65 vs women aged younger than 30 years, 95% CI 0.42–0.99). There was no effect of age at first birth for BRCA2 carriers. Breastfeeding also appeared protective for lifetime risk of cancer for BRCA1 carriers, although meta-analysis could not be performed because of study heterogeneity. Reported effects based on case–control studies showed a 32–50% decreased risk if breastfeeding continued for more than 1 year compared with never breastfeeding. Additionally, three or more live births also appeared to have a protective effect for BRCA1 carriers (effect size=0.57, 95% CI 0.39–0.85) as well as BRCA2 carriers (effect size=0.52, 95% CI 0.30–0.86), compared with nulliparity. For BRCA1 or BRCA2 carriers, there were no significant or reliably replicated effects of alcohol consumption, oral contraceptive use, or smoking. , In review articles on risk factors for women at average risk, there was no reliable effect seen for alcohol consumption or modification of other dietary factors for premenopausal breast cancer. , There are no guidelines specific to the prevention of early-onset breast cancer. Those that may be considered relevant address lifetime breast cancer risk reduction, largely among women older than age 35 years. The National Comprehensive Cancer Network recommends tamoxifen, 20 mg/d, for up to 5 years for women aged 35 years and older with a high 5-year risk of breast cancer, defined as a 5-year risk of 1.7% or higher using the Gail model, or prior lobular carcinoma in situ. U.S. Preventive Services Task Force guidelines for reducing the risk of primary cancer state that women at increased risk should engage in shared decision making regarding chemoprevention. The National Comprehensive Cancer Network advises a healthy lifestyle for reduction of risk for breast cancer for all women, although the magnitude of this reduction and whether it extends to early-onset or premenopausal breast cancer are unknown. Elements of healthy lifestyle advised by the National Comprehensive Cancer Network include limited alcohol consumption, vigorous physical activity, maintaining a healthy weight, and breastfeeding (see Appendix 10, available online at http://links.lww.com/AOG/B873 , for complete evidence summary). Breast self-examination is no longer part of major society guidelines for average-risk women, given the high number of false-positive results and absence of supportive evidence for benefit. , , Our literature review found no evidence for its use in women at risk for early-onset breast cancer, but women should be counseled to be familiar with their breasts and promptly report changes to their breasts to their health care provider. Survivorship care in women with early-onset breast cancer is a critical component of initial evaluation and treatment as well as ongoing care.
Chemotherapy frequently, and to a variable degree, causes amenorrhea, menopause, or true ovarian failure, with consequences such as infertility or subfertility, bone loss, and increased cardiac risk as well as menopausal symptoms, which can have a significant effect on quality of life. Age at diagnosis, receptor status, and treatment regimen are important considerations in managing ongoing care for women affected by early-onset breast cancer. The National Comprehensive Cancer Network and the American Society of Clinical Oncology have produced comprehensive guidelines for survivorship. , The American Cancer Society and the American Society of Clinical Oncology jointly created survivorship guidelines after systematic review in 2015. Although not specific for early-onset breast cancer, ACOG provides resources about managing gynecologic issues in women with breast cancer, many of which are applicable for women with early-onset breast cancer. The American College of Obstetricians and Gynecologists recommendations include use of nonhormonal interventions for symptomatic patients, because data are conflicting about the deleterious effects of hormone therapy on recurrence and overall survival rates. Although not specific to women with early-onset breast cancer, the North American Menopause Society and the International Society on Women's Sexual Health have co-authored recommendations regarding the treatment of genitourinary syndrome of menopause in women with breast cancer. Management of women who have or have had early-onset breast cancer should include attention to the issues of contraception, fertility, and pregnancy (a brief illustrative sketch of the contraception points follows this section):
- Effective contraception is often overlooked as part of the treatment regimen for patients with early-onset breast cancer, and family planning consultation should be considered.
- The copper intrauterine device (IUD) is the preferred contraceptive method for women with breast cancer, although the levonorgestrel IUS can safely be used in combination with tamoxifen. ,
- The preferred method of emergency contraception is the copper-containing IUD, although progestin regimens can also be used.
- All women with early-onset breast cancer should have fertility preservation counseling.
- Oocyte and embryo cryopreservation is considered first-line treatment.
- Treatment with gonadotropin-releasing hormone agonist during chemotherapy should be considered when oocyte and embryo cryopreservation is not possible; it affords some protection to the ovary and is associated with increased fertility rates when compared with no treatment.
- To lower estrogen levels, aromatase inhibitors and gonadotropin-releasing hormone agonist triggers should be used when employing controlled ovarian stimulation in women with a history of early-onset breast cancer who are undergoing fertility treatments.
- Prenatal genetic diagnosis should be considered in women with BRCA mutations or other documented germ line mutations undergoing in vitro fertilization procedures.
- Ovarian tissue harvesting offers a promising alternative to cryopreservation therapies.
- Pregnancy after a diagnosis of early-onset breast cancer has not been shown to increase the risk of recurrence.
- When considering timing, pregnancy occurring at least 10 months after breast cancer diagnosis was not found to be harmful and may even contribute to survivorship.
- When breast cancer is diagnosed in pregnancy, chemotherapy can be safely instituted in the second and third trimesters.
See Appendix 1 ( http://links.lww.com/AOG/B864 ) for complete evidence summary.
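The contraception points above amount to a small decision rule. The sketch below encodes only what is stated in this summary (a copper intrauterine device preferred; a levonorgestrel system acceptable alongside tamoxifen); the function name and the returned strings are illustrative assumptions rather than guideline language, and real decisions require individualized counseling.

```python
def contraception_options_after_early_onset_breast_cancer(on_tamoxifen: bool) -> list:
    """Illustrative only; mirrors the two statements in the summary above."""
    options = ["copper intrauterine device (preferred)"]
    if on_tamoxifen:
        # Per the summary above, a levonorgestrel IUS can safely be used in
        # combination with tamoxifen.
        options.append("levonorgestrel intrauterine system (acceptable with tamoxifen)")
    return options


print(contraception_options_after_early_onset_breast_cancer(on_tamoxifen=True))
```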
The evidence review and subsequent stakeholder discussion revealed the following research gaps and opportunities for early-onset breast cancer (see Appendix 11, available online at http://links.lww.com/AOG/B874 , for a more in-depth assessment):
- Develop risk-assessment tools specific to early-onset breast cancer
- Optimize integration of risk assessment into primary care visits and electronic health records
- Obtain data on and determine optimal screening for nonwhite populations
- Determine risks associated with dense breasts in young women
- Determine appropriate adjunctive screening for young women with dense breasts
- Validate epidemiologic data largely based on European populations in U.S. women, including underrepresented subgroups
- Develop strategies to eliminate implicit bias among health care providers and medical systems
- Expand screening, genetic counseling, and testing among high-risk women
- Develop and validate tools for communicating early-onset breast cancer risk to patients
- Develop and validate training techniques for health care providers to screen, test, and initiate risk-reducing strategies in women at risk for early-onset breast cancer
- Determine safety and optimal timing of pregnancy after treatment for early-onset breast cancer
- Optimize fertility preservation in women undergoing treatment for early-onset breast cancer
Biochanin A restored the blood–brain barrier in cerebral ischemia-reperfusion in rats
bfe43db2-22b9-4499-b881-24a78619c22a
11288263
Anatomy[mh]
The brain is a component of the central nervous system that contains numerous nerve cells. Understanding brain homeostasis and function is important for elucidating brain damage , . The cross-ancestry genetic risk score has been reported to predict ischemic stroke independently of clinical risk factors and outperform previous genetic risk assessment . Biochanin A (BCA) has been reported to show protective effects in angiotensin II-induced rat models and may increase endophilin A2 expression and decrease angiotensin II type 1 receptor expression through inhibition of inflammatory responses . BCA (C16H12O5) is an O-methylated natural flavonoid found in red clover, chickpeas, and other legumes, belonging to the phytoestrogen family . Recent studies have shown that BCA has various pharmacological properties, including anti-tumorigenic, antioxidant, anti-inflammatory, and hypoglycemic effects - . BCA was reported to be effective in the treatment of cerebral ischemia-reperfusion (IR) injury in rats , . BCA was also specifically shown to prevent the initiation of the inflammatory response and downregulate the expression of pro-inflammatory factors in rats , . Cerebral IR can clinically cause vasogenic edema and hemorrhagic transformation and may result in mortality if not treated in the acute phase . SMI71 is a specific marker of the rat blood–brain barrier (BBB). Many studies have shown that SMI71 can be used to investigate BBB integrity . SMI71 is an antibody designed to detect a rat endothelial protein localized in regions containing the BBB. SMI71 does not react with endothelial cells in periventricular and peripheral tissues such as the liver, heart, adrenal glands, skeletal muscle, intestine, thymus, lymph nodes, pancreas, thyroid, and skin. Notably, the reactivity with this antibody emerges in newborn rats concurrently with the maturation of the BBB , . This study aimed to investigate the effect of BCA on the histology of the BBB after cerebral IR by examining the expression level of BBB components.
Ethical approval and animal housing
All animal experiments were approved by the Animal Experimentation Local Ethics Committee of Dicle University (2023/04). Animals were allowed access to water and food ad libitum and housed in cages (12 h/12 h dark/light period, 23±1°C). BCA was purchased from Merck (catalog no: D2016, Germany).
Surgical procedures
All procedures were performed under anesthesia. A total of 24 Wistar albino female rats were assigned to three categories: sham, IR, and IR+BCA (n=8 per group). The rats were fixed on the operating table in the supine position, and the neck area was cleaned with povidone iodine. A midline incision was made with surgical scissors from the upper edge of the sternum to the hyoid bone. The incision was widened with a tissue retractor to expose the trachea. Then, the paratracheal muscles were dissected, and the common carotid artery (CCA) was observed. The left CCA was occluded for 2 h with a micro bulldog clamp placed approximately 1 cm proximal to the carotid bifurcation. After cerebral ischemia, the clamp was removed, the tissues were returned to their anatomical location, and the skin and subcutaneous fascia were sutured. Cerebral reperfusion was allowed for 24 h. A 200 mM stock solution was prepared by dissolving BCA in DMSO.
Sham group: Cerebral artery occlusion was not performed. Only the left CCA was isolated and returned to its anatomical location. Animals were given 1 mL of DMSO intraperitoneally for 7 days.
IR group: the cerebral IR procedure was performed. Animals were given 1 mL of DMSO intraperitoneally for 7 days. IR+BCA group: after the IR procedure, 20 mg/kg BCA was administered intraperitoneally for 7 days. Malondialdehyde and total antioxidant status/total oxidant status At the end of the experimental protocol (the end of the seventh day), all animals were sacrificed under anesthesia. Malondialdehyde (MDA; Merck, catalog no: MAK085), total antioxidant status (TAS, mmol Trolox Equiv./L), and total oxidant status (TOS, μmol H2O2 Equiv./L) kits were purchased commercially (Rel Assay Diagnostics, Turkey). Blood samples from each rat were centrifuged at 2000 rpm for 10 min, and the supernatant was collected. Serum MDA, TAS, and TOS levels were then determined according to Durgun et al. Histological tissue processing Cerebral tissues were excised for histological sampling and evaluation. Samples were immersed in zinc-formalin, dehydrated through a graded alcohol series, and embedded in paraffin wax. Sections of 5 μm were cut from the paraffin blocks and processed for hematoxylin–eosin staining and immunostaining. Immunohistochemical examination Cerebral sections were dewaxed, hydrated through a graded alcohol series, and washed in distilled water. Hydrogen peroxide (H2O2; 3%) was dropped onto the slides to block endogenous peroxidase activity. After washing in PBS, sections were incubated with the anti-BBB antibody (catalog no: 836804, BioLegend, California, USA) overnight at +4°C. Sections were biotinylated and allowed to react with streptavidin peroxidase solution (Thermo Fisher, USA) for 15 min. After PBS washing, diaminobenzidine (DAB) was used as the chromogen to visualize the reaction. The reactions were stopped with PBS, and sections were counter-stained with hematoxylin. Slides were mounted and imaged with a Zeiss Imager A2 light microscope. All images were processed and quantified using the ImageJ software. Negative control staining followed the same protocol, except that sections were incubated with PBS instead of the primary antibody. Image J analysis The staining intensity of BBB expression was measured with the ImageJ software (version 1.53, http://imagej.nih.gov/ij ). Measurement was performed according to the method of Crowe et al., and quantification was based on 10 fields from each specimen per group. In the specimens, brown color indicates positive expression of the antibody of interest, while blue color indicates negative expression. Signal intensity (expression) in a field was calculated by dividing the stained area of the antibody of interest by the whole area of the field. A staining area/whole area value was calculated for each specimen from the ten fields, and group averages were used for semi-quantitative immunohistochemistry scoring. Statistical analysis Statistical analysis was done using the IBM SPSS 25.0 software (IBM, Armonk, New York, USA). Data distribution was assessed with the Shapiro-Wilk test. The data were recorded as median (IQR). The non-parametric Kruskal-Wallis test was used for comparisons among more than two groups, and the post-hoc Dunn test was used because of the small number of animals per group. Statistical significance was accepted at p<0.05.
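The group comparisons described above were run in SPSS; for readers who prefer a scriptable equivalent, the sketch below reproduces the same Shapiro-Wilk, median (IQR), Kruskal-Wallis, and Dunn post-hoc workflow in R on simulated values. The data, the dunn.test package, and the Bonferroni adjustment are illustrative assumptions, not part of the original analysis.

```r
# Illustrative R equivalent of the SPSS workflow; values are simulated, not the study's data.
set.seed(1)

df <- data.frame(
  group = factor(rep(c("sham", "IR", "IR_BCA"), each = 8)),                 # n = 8 rats per group
  mda   = c(rnorm(8, 2.0, 0.3), rnorm(8, 4.5, 0.5), rnorm(8, 3.0, 0.4))     # hypothetical MDA values
)

# Normality check per group (Shapiro-Wilk), as in the reported analysis
by(df$mda, df$group, shapiro.test)

# Median (IQR) per group, matching the descriptive statistics reported
aggregate(mda ~ group, data = df,
          FUN = function(x) c(median = median(x), IQR = IQR(x)))

# Kruskal-Wallis test across the three groups
kruskal.test(mda ~ group, data = df)

# Dunn post-hoc test (assumes the 'dunn.test' package; Bonferroni adjustment is an assumption)
dunn.test::dunn.test(x = df$mda, g = df$group, method = "bonferroni")
```

The same calls apply unchanged to the TAS and TOS values and to the per-specimen immunostaining scores.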
Oxidative stress findings Statistical analysis of biochemical and histopathologic scores is shown in . MDA and TOS values were significantly increased in the IR group compared with the sham group, while the TAS value was significantly decreased. After BCA treatment, MDA and TOS levels decreased significantly and TAS content increased significantly in the IR+BCA group compared with the IR group. Histopathologic findings Hematoxylin–eosin staining of cerebral sections is shown in – . The sham group showed no pathological lesions in the cerebrum; neurons and vessels were histologically normal . In the IR group, the integrity of the cerebral cortex was disrupted, with degenerated neurons and vascular structures, and a large number of cells showed pyknotic nuclei . Compared with the IR group, BCA treatment restored the cerebral pathologies caused by IR in the IR+BCA group . BBB immunoreactivity is shown in – . High BBB expression was recorded in the sham group around the blood vessels where the nerve–blood barrier existed . BBB immunoreactivity was decreased in the IR group due to disruption of the BBB . BCA treatment increased BBB immunoreactivity by restoring the BBB in the cerebral cortex, and BBB immunoreactivity was intensely observed around the regions where the barrier existed compared with the IR group . Negative and positive control immunostaining of a cerebral section from a healthy rat is shown in and , respectively. Image J analysis The staining intensity of BBB expression is shown in . BBB expression was downregulated after cerebral IR injury, whereas BCA treatment, with its antioxidant properties, upregulated BBB expression.
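The semi-quantitative scores reported here come from the per-field staining area/whole area ratios described in the Methods. A minimal R sketch of that aggregation is shown below; the input file name and column names are hypothetical stand-ins for an ImageJ export.

```r
# Minimal sketch of the semi-quantitative scoring described in the Methods:
# per-field staining area / whole field area, averaged over the 10 fields per specimen.
# The file and column names (specimen, group, stain_area, field_area) are hypothetical.

fields <- read.csv("imagej_fields.csv")          # one row per analysed field
fields$ratio <- fields$stain_area / fields$field_area

# Mean staining ratio per specimen (10 fields each)
per_specimen <- aggregate(ratio ~ specimen + group, data = fields, FUN = mean)

# Group-level summary used to compare sham, IR, and IR+BCA
aggregate(ratio ~ group, data = per_specimen,
          FUN = function(x) c(median = median(x), IQR = IQR(x)))
```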
IR injury is among the causes of tissue damage in clinical settings such as myocardial infarction, stroke, and organ transplantation. After the tissue is re-perfused, further damage occurs, a phenomenon known as the paradox of IR injury. This process is quite complex and not yet fully understood . During IR injury, the production of reactive oxygen species (ROS) increases, and alterations in mitochondrial homeostasis lead to oxidative damage in tissues and eventually induce a proinflammatory response , . Medicinal plants with antioxidant activity alleviate IR injury - . BCA is a plant-derived compound with an action similar to that of melatonin . BCA exhibits antioxidant properties, which can help neutralize the ROS generated during reperfusion; by scavenging ROS, BCA may reduce oxidative stress and prevent cellular damage . Additionally, IR injury triggers an inflammatory response, leading to tissue damage. With its anti-inflammatory properties, BCA may reduce inflammation and suppress the production of pro-inflammatory cytokines, attenuating tissue inflammation . BCA also exerts vasodilatory effects, which may help improve blood flow and tissue perfusion during reperfusion following ischemia. Enhanced vasodilation can increase oxygen and nutrient delivery to the ischemic area, potentially reducing the extent of injury . This study showed that IR injury increased the MDA content and TOS value and lowered the TAS value. BCA treatment improved these scores because of its many biological activities, particularly its antioxidant properties : BCA is a good free-radical scavenger and induces the antioxidant system after IR. IR causes disruption of the cerebral cortex and degeneration of neurons, and these pathological alterations were restored after BCA treatment . Owing to the neuroprotective effects of BCA, the histology of the cerebral cortex improved with BCA treatment after IR injury. The BBB protects the delicate nervous tissue from pathogens and microbes, and its maintenance is vital for cerebral homeostasis. Brain endothelial cells help regulate the BBB, in part through mechanical induction . Brain endothelial cells (BECs) differ from peripheral endothelial cells in features such as low expression of adhesion molecules, a high number of mitochondria, and high polarization . Impairment of the BBB alters its semi-selective permeability, leading to numerous neurological disorders. IR injury deteriorates the BBB and causes the upregulation of pro-inflammatory cytokines (e.g., TNF-α, IL-1β, and IL-6) .
BCA is a plant-derived compound with many pharmacological activities, including anti-inflammatory and antioxidant properties. Guo et al. showed that BCA protected neural tissue against cerebral IR via oxidative stress and inflammation pathways . El-Sayed et al. showed that BCA had neuroprotective effects in an epileptic animal model via modulation of inflammatory and autophagy pathways. In this study, cerebral IR injury disrupted the BBB and reduced its immunoreactivity, whereas administration of BCA upregulated BBB expression relative to the IR group and restored the BBB . Although phytotherapy is acknowledged as a healing approach endorsed by national health authorities, it is still not officially recognized as a medical specialty . Nevertheless, we suggest that BCA treatment may modulate the components of the BBB via modulation of the inflammatory pathway and anti-oxidative stress mechanisms. Cerebral IR injury generates free radicals and deteriorates cerebral histology and the BBB. With its antioxidant properties, BCA treatment reduced the ROS generated during IR injury and promoted the cellular scavenging system. Additionally, with its anti-inflammatory properties, BCA restored the BBB by modulating the inflammatory response pathway after cerebral IR injury.
Signatures of transmission in within-host
9353aae5-759f-44da-bc2c-93aad383ca1a
11777664
Biochemistry[mh]
Reducing the global burden of tuberculosis urgently requires reducing the number of incident Mycobacterium tuberculosis complex (MTBC) infections. Yet the long and variable latency period of these infections makes it challenging to identify sources of transmission and thus intervene. Genomic epidemiology approaches have been powerfully applied to characterise MTBC global phylogenetic structure, migration and gene flow, patterns of antibiotic resistance, and transmission linkages. Yet transmission inference approaches have often failed to identify the majority of transmission linkages in high-incidence settings. , Further, although previous studies have identified heterogeneity in the number of secondary cases generated by infectious individuals and risk factors for onward transmission, , these are often difficult to generalise. Many crucial questions, including the contribution of asymptomatic individuals to transmission, remain unanswered. Novel, accessible approaches to reconstruct high-resolution transmission patterns are urgently needed so that public health programmes can identify environments driving transmission and risk factors for onward transmission. Commonly used approaches for MTBC transmission inference use single consensus genomes, representing the sequence of the most frequent alleles, from infected individuals. Closely related pathogen genomes are predicted to be more closely linked in transmission chains. For example, closely related MTBC consensus sequences, with a pairwise genetic distance under a given threshold, are considered clustered and potentially epidemiologically linked. However, MTBC evolves at a relatively slow rate. The result is that there might be limited diversity in outbreaks. Several genomic epidemiology studies reported that multiple individuals harboured identical MTBC genomes, making it difficult to reconstruct who infected who. This challenge highlights a need to recover more informative variation from pathogen genomes, a challenge not unique to MTBC. Population-level bacterial diversity within an individual, or within-host heterogeneity, can be attributed to mixed infections (ie, infections with more than one distinct MTBC genotype) or de novo evolution (ie, mutations that are introduced over the course of an individual’s infection). Previous research has found that a substantial proportion (10–20%) of infected individuals harbour mixed infections with genetically diverse populations of MTBC. , A portion of within-host heterogeneity is probably transmitted onward and therefore, within-host diversity captures potentially valuable epidemiological information about transmission history. Complex infections are also important clinically. Within-host heterogeneity is associated with poor treatment outcomes, , and heteroresistance—presence of bacteria cells exhibiting different levels of susceptibility to specific antibiotics—reduces the accuracy of diagnostics for antibiotic resistance. Given that minority variation is frequently observed, we might expect that it could improve resolution of transmission and phylogenetic inference. Yet there are many open questions about whether shared within-host variation is a predictor of transmission linkage and, more practically, how to recover this level of variation and incorporate it into transmission inferences. Currently, MTBC is most frequently cultured from sputum samples and sequenced with short reads to generate a single consensus sequence. 
First, this approach limits the variation recovered because culture imposes a severe bottleneck, because there might be small numbers of cells from minority populations in sampled sputum, competition or stochastic growth in culture might result in loss of minority variants, and cultured samples are often subdivided for sequencing. , Second, within-host variation, including mixed infections, is often excluded, in part due to an absence of validated methodological approaches for accurate recovery of such variation. , Third, repetitive genomic regions, including the PE and PPE gene families, among the most variant-rich and potentially informative regions of the genome, are excluded. - MTBC transmission is never directly observed, and in practice, epidemiological linkages are frequently unknown. This unknown makes it difficult to assess the performance of genomic methods in identifying true transmission linkages. We therefore aimed to leverage previously published household transmission studies to test whether household members—as a proxy for epidemiologically linked individuals—shared more minority variants than did unlinked individuals. We then aimed to test whether shared minority MTBC variation might augment fixed genomic differences in reconstructing epidemiological linkages and might enhance transmission inferences. Study design To characterise the epidemiological information held in within-host MTBC variation present in routinely generated Illumina sequence data from cultured isolates, we conducted a retrospective genomic epidemiology study in which we reanalysed sequence data from previously published MTBC household transmission studies, using household membership as a proxy for transmission linkage. We searched PubMed from database inception until Jan 31, 2024, for relevant articles published in English using the terms “Mycobacterium tuberculosis”, “whole genome sequencing”, “transmission”, and “household”. We selected studies for which both raw sequencing data were deposited on a public database and for which epidemiological data on household membership were additionally available. We also included a household study that focused on estimating the M tuberculosis substitution rate, but for which both genomic and household membership were available (Colangeli et al). We extracted information on epidemiological linkage such as household membership from the studies eligible for inclusion. This reanalysis was considered non-human subject research by the University of Utah Institutional Review Board (IRB_00176142) and hence was exempt from full approval by the Institutional Review Board. Procedures We processed raw sequence data with a previously described variant identification pipeline available on GitHub . Briefly, we trimmed low-quality bases and removed adapters with Trim Galore (version 0.6.5; stringency=3). We used CutAdapt (version 4.2) to further filter reads. We used Kraken2 to taxonomically classify reads, mapped reads with bwa (version 0.7.15), and removed duplicates with sambamba. We called variants with GATK 4.1 HaplotypeCaller, setting sample ploidy to one, and GenotypeGVCFs. We included variant sites with a minimum depth of 5× and a minimum variant quality score of 20 and constructed consensus sequences with bcftools consensus, excluding indels. We used the R package ape (version 5.7) to measure pairwise differences between samples and fit a maximum likelihood tree with IQ-TREE, with 1000 ultrafast bootstrap replicates. 
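As a rough illustration of the consensus-level step described in the Procedures (pairwise differences measured with the R package ape, followed by SNP-threshold clustering), the sketch below assumes an aligned multi-FASTA of consensus sequences; the file name and the use of a raw difference count in dist.dna are assumptions rather than the study's exact commands.

```r
# Sketch of consensus-level pairwise distances and SNP-threshold clustering.
# The alignment file is hypothetical; thresholds follow the 5- and 12-SNP conventions cited above.
library(ape)

aln <- read.dna("consensus_alignment.fasta", format = "fasta")   # aligned consensus sequences

# Pairwise number of differing sites between consensus genomes
snp_dist <- dist.dna(aln, model = "N", pairwise.deletion = TRUE)
snp_mat  <- as.matrix(snp_dist)

# Flag sample pairs falling under the commonly used clustering thresholds
pairs <- which(upper.tri(snp_mat), arr.ind = TRUE)
pair_table <- data.frame(
  sample_a        = rownames(snp_mat)[pairs[, 1]],
  sample_b        = colnames(snp_mat)[pairs[, 2]],
  snp_dist        = snp_mat[pairs],
  clustered_12snp = snp_mat[pairs] <= 12,
  clustered_5snp  = snp_mat[pairs] <= 5
)
head(pair_table)

# A maximum likelihood tree would then be fitted from the same alignment with IQ-TREE,
# as described above (run externally, not shown here).
```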
Statistical analysis We considered minority variants as positions with two or more alleles each supported by at least 5× coverage at the same position, with the minor allele frequency above 1%, including variants across the full MTBC genome. We then quantified the proportion of minority variants occurring within PE and PPE genes. In subsequent analyses, we excluded PE and PPE genes, which might be more error prone. We compared mean per-sample minority variation found in different studies with the Wilcoxon rank sum test. We quantified the number of minority variants with different predicted variant effects, as categorised by SnpEff v.5.2. We measured associations between the total number of per-sample minority variants as well as minor allele frequency and per-sample median depth of coverage with Pearson's correlation coefficient. For all tests, we used a significance threshold of p less than 0·05. We fit a logistic regression model for the number of per-sample minority variants including lineage, study, and sample median coverage with the base R function glm. We estimated odds ratios (ORs) for each covariate and characterised model uncertainty with 95% CIs. We then measured the number of minority variants shared between household members and the number of shared minority variants between epidemiologically unrelated pairs. To assess trade-offs in sensitivity and specificity in minority variant identification, we measured shared minority variants after applying increasingly conservative minor allele thresholds: 0·5%, 1·0%, 2·0%, 5·0%, 10·0%, 20·0%, and 50·0%. We fit logistic regression models for pairwise epidemiological linkage: including (1) both genetic cluster membership (defined in different models by 12-single-nucleotide polymorphism [SNP] and five-SNP genetic distance thresholds) and shared minority variants, (2) only genetic cluster membership, and (3) only shared minority variants. We measured the performance of general linear models in classifying household pairs versus unlinked pairs with receiver operator characteristic (ROC) curves across all minor allele frequency thresholds, with the R package yardstick (version 1.3.1) and identified thresholds that maximised model ROC. We tested for correlations between genetic distance between MTBC consensus sequences and shared minority variants with Pearson's correlation coefficient. For the Colangeli et al study, which reports sampling time, we measured the association between sampling time between donor and recipient transmission pairs and number of shared minority variants with Pearson's correlation coefficient. Following variant identification, all analyses were conducted in R (version 4.2.2). Role of the funding source The funder of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report.
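To make the pairwise-linkage modelling described in the statistical analysis concrete, the following R sketch fits a logistic regression combining consensus-based clustering with standardised shared minority variation and scores it with an ROC AUC via yardstick. The input table and its column names are hypothetical, and Wald confidence intervals are used here for simplicity; this is an illustration under those assumptions, not the study's code.

```r
# Sketch of the pairwise-linkage models described above.
# 'pairwise_features.csv' and its columns are hypothetical stand-ins:
#   household        "yes"/"no"  (proxy for epidemiological linkage)
#   clustered_12snp  TRUE/FALSE  (consensus distance <= 12 SNPs)
#   shared_minority  integer     (shared minority variants at a given MAF threshold)
library(yardstick)

pairs_df <- read.csv("pairwise_features.csv")

# Standardise shared minority variation so the OR refers to a 1-SD increase
pairs_df$shared_minority_z <- as.numeric(scale(pairs_df$shared_minority))
pairs_df$linked <- as.integer(pairs_df$household == "yes")

# Model 1: clustering + shared minority variation (models 2 and 3 drop one term)
m1 <- glm(linked ~ clustered_12snp + shared_minority_z,
          family = binomial, data = pairs_df)
exp(cbind(OR = coef(m1), confint.default(m1)))   # odds ratios with Wald 95% CIs

# Classification performance: area under the ROC curve
pairs_df$pred     <- predict(m1, type = "response")
pairs_df$linked_f <- factor(pairs_df$household, levels = c("yes", "no"))
roc_auc(pairs_df, truth = linked_f, pred)
```

Re-running the shared-minority column and the AUC at each minor allele frequency threshold (0·5% to 50%) reproduces the threshold sweep described above.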
We identified three household transmission studies for which both raw sequence data and epidemiological linkages were publicly available: a household transmission study in Vitória, Brazil (Colangeli et al ), a retrospective population-based study of paediatric tuberculosis in British Columbia, Canada (Guthrie et al ), and a retrospective population-based study in Oxfordshire, England (Walker et al ; ). Study design, sampling design, culture and sequencing methods, and MTBC lineage representation differed across studies . As reported in the original studies, we observed limited fixed variation between MTBC consensus sequences from isolates collected within the same household or among isolates from patients with epidemiological linkages compared with randomly selected pairs of sequences from the same population . Consensus MTBC sequences from epidemiologically linked individuals were phylogenetic nearest neighbours for each study . However, genetic distances between consensus sequences often exceeded the commonly used five-SNP and 12-SNP thresholds , for classifying isolates as potentially linked in transmission, with 20 (44·4%) of 45 household pairs not meeting a five-SNP threshold and seven (15·6%) household pairs not meeting a 12-SNP threshold . 11 (24·4%) isolate pairs from epidemiologically linked individuals were within a genetic distance of two SNPs or less, underscoring that genomic distances alone might be limited in their resolution. We detected a small, but measurable, minority variation above a 1% minor allele frequency threshold in routine, culture-based MTBC sequencing data, with a disproportionate number of minority variants occurring within the PE and PPE genes (mean 55·5 minority variants [24·8%] of 224·0 total minority variants in Colangeli et al; 27·1 [82·2%] of 33·0 in Guthrie et al; and 28·8 [80·1%] of 35·9 in Walker et al of all minority variants, across the studies; ). Outside of the PE and PPE genes, we found significant differences in minority variation detected across studies with the Colangeli et al study (mean 168·6 minority variants [95% CI 151·4–185·9]) identifying a higher level of minority variation than both the Guthrie et al study (5·8 [1·5–10·2]; Wilcoxon rank sum test p<0·0001) and the Walker et al study (7·1 [2·4–11·9], p<0·0001; ). A single isolate from an individual without household contacts had evidence of two co-infecting MTBC lineages in the Walker et al study. Most minority variants were in unique genomic locations and no minority variant was found in more than five samples in a single study .
964 (50·0%) of 1929 minority variants were predicted to be missense variants and only 25 (1·3%) minority variants were stop mutations, which would generate a truncated protein. However, the five most common minority variants across all three studies occurred in intergenic regions. Median depth of coverage was significantly correlated with the total number of minority variants detected outside the PE and PPE genes for the Walker et al study ( r =0·15, p=0·015), whereas no association was identified in the Colangeli et al ( r =−0·044, p=0·76) or Guthrie et al studies ( r =−0·024, p=0·91; ). Additionally, minor allele frequency was negatively correlated with site depth of coverage in the Colangeli et al ( r =−0·20, p<0·0001) and the Walker et al studies ( r =−0·31, p<0·0001), but not in the Guthrie et al study ( r =0·11, p=0·18; ), potentially indicating that both culture method and sequencing depth were responsible for the observed differences in recovered variation . Levels of minority variation within a sample were associated with MTBC lineage 2 isolates (OR 2·13 [95% CI 1·86–2·43]) and negatively associated with lineage 3 (0·38 [0·32–0·45]) and lineage 4 isolates (0·79 [0·69–0·90]), when compared with lineage 1 isolates, when also controlling for study and isolate median coverage. Isolates from household pairs shared more minority variants detected at a frequency of 1% or more and outside of PE and PPE genes than did randomly selected pairs of isolates: mean 97·7 (95% CI 79·1–116·3) shared minority variants in isolates from household pairs versus 9·8 (8·6–11·0) in isolates from randomly selected pairs in Colangeli et al; 0·8 (0·1–1·5) versus 0·2 (0·1–0·2) in Guthrie et al; and 0·7 (0·1–1·3) versus 0·2 (0·2–0·2) in Walker et al (all p<0·0001, Wilcoxon rank sum test; ; ). This effect rapidly declined as the definition of minority variant became more stringent . In each study, the distribution of shared minority variants differed significantly between epidemiologically unlinked and epidemiologically linked isolate pairs . In a general linear model, shared within-host variation with a frequency of 1% or more and outside of PE and PPE genes was significantly associated with household membership (OR 1·51 [95% CI 1·30–1·71], p<0·0001) for one standard deviation increase in shared minority variants. Genomic clustering, based on a standard 12-SNP clustering distance threshold, was also significantly associated with household membership (332 [147–913], p<0·0001). When applying a five-SNP clustering distance threshold, we observed a similar association of household membership with shared minority variants (1·52 [1·38–1·67], p<0·0001). We measured the performance of general linear models in classifying household pairs versus unlinked pairs with ROC curves. Including shared within-host variation improved the accuracy of predictions in all three studies as compared with a model without within-host variation (area under the ROC curve [AUC] 0·95 vs 0·92 for Colangeli et al, 0·99 vs 0·95 for Guthrie et al, and 0·93 vs 0·91 for Walker et al; ). A model including within-host variation independently of consensus sequence-based clustering resulted in AUCs of 0·69 (Colangeli et al), 0·64 (Guthrie et al), and 0·64 (Walker et al; ). To assess trade-offs in sensitivity and specificity in minority variant identification, we applied a series of increasingly conservative minor allele frequency thresholds, filtering variants detected at frequencies ranging from 0·05% to 50%. 
Maximum AUC for predicting household membership was 0·998 (minor allele frequency threshold: 2%) for the Colangeli et al study, 0·996 (threshold: 5%) for the Guthrie et al study, and 0·943 (threshold: 5%) for the Walker et al study . Among epidemiologically unlinked pairs, shared minority variants declined significantly with increased genetic distance between samples across all studies . For household pairs, we did not find a significant correlation between the genetic distance between isolate consensus sequences and number of shared minority variants in the Colangeli et al ( r =0·058, p=0·46) or Walker et al studies ( r =0·18, p=0·12; ), suggesting that this relationship might not be linear. However, we did find a positive correlation between genetic distance and shared minority variants the Guthrie et al study ( r =0·63, p=<0·0001), which was due to a single pair with a genetic distance of greater than 20 SNPs. Allele frequencies of shared minority variants with a frequency of 1% or more located outside of PE and PPE genes were correlated between isolates from household pairs in Colangeli et al (Pearson’s r =0·17, p<0·0001) and Guthrie et al ( r =0·94, p<0·0001), but not Walker et al . We predicted that sampling time might impact recovery of shared minority alleles because of changes in allele frequency between the time of sampling and time of transmission. In the Colangeli et al study, shared minority variation was negatively correlated ( r =−0.39, p=0·058) with time between collection of isolates from household index cases and household members; however, this finding was not significant . The other studies did not report sampling times. To maximise the epidemiological information gleaned from the continuous evolution of MTBC, approaches to leverage biological variation more fully are needed. Here, we found that (1) within-host MTBC variation appears to persist in sequence data from culture; (2) the magnitude of within-host variation varies between and within studies and is affected by methodological choices, lineage, or both; and (3) MTBC isolates from epidemiologically linked individuals share higher levels of variation than do unlinked individuals and shared within-host variation improves predictions of epidemiological linkage. Our results suggest that minority variation could contribute epidemiological information to transmission inferences, improving inferences from consensus sequences, and that alternative approaches to culture-based sequencing might further contribute to this observed epidemiological signal. As sequencing has become more efficient and less expensive, pathogen genomic studies have begun to describe previously uncharacterised levels of minority variation within individual hosts and shared between transmission pairs. For example, MTBC within-host variation has been used to reveal an undetected superspreader in a single large outbreak in the Canadian Arctic, shared patterns of co-infection in an outbreak in the Colombian Amazon, shared patterns of variation in a previously described compensatory mutation in Paraguay, and shared minority variants among epidemiologically linked individuals in Spain. The existence of shared minority variants suggests that variation present in a donor’s infection persists through transmission and is maintained within the recipient through population changes and immune pressures. 
Recently developed transmission inference approaches include pathogen within-host diversity to infer transmission events, but are not frequently applied to MTBC, which is unique in its slow substitution rate and long and variable periods of latent infection. Characterising within-host variation can also illuminate evolutionary processes—for example, parallel evolution and within-host adaptation of Mycobacteroides abcessus in longitudinally sampled patients. In the present study, we quantified minority variants identified by a standard variant calling pipeline indicating that new pipelines are not required to harness this level of pathogen variation. Future work is needed to develop automated, user-friendly pipelines for transmission and phylogenetic inference that include both fixed genomic differences and within-host variation. A major challenge in pathogen genomics, including studies of within-host pathogen variation, is in distinguishing true biological variation from noise introduced by sequencing, bioinformatic analysis, or other errors. Often, pathogen genomic approaches err on the side of specificity and impose conservative variant filters. Our findings here and previously suggest that for studying transmission linkages, including low frequency minority variants could improve predictions of transmission linkage, although it is possible that some of the minority variants within individual samples and shared across samples are artifacts. Our findings underscore the further work needed to optimise approaches for highly accurate identification of both within-host and genome-wide variation. For example, because of limited variation observed in transmission clusters, there has been interest in using PE and PPE genes as an additional source of genetic variation. Our observation that minority variants are concentrated in PE and PPE genes highlights the need for testing whether long read sequencing or alternative mapping approaches can improve the accuracy of variant identification in this highly variable region. , Work published in the past 5 years showed that pathogen enrichment approaches—through either host DNA depletion or pathogen DNA enrichment—can allow MTBC sequencing directly from clinical samples, bypassing the need for culture. , Sequencing from positive liquid broth culture of specimens might be an intermediate step to improve detection of within-host variation. There are several limitations to our study. First, we conducted a reanalysis of previously published sequence data from clinical MTBC samples. We therefore do not have information about the true biological variation present within samples and cannot assess sensitivity and specificity of variants identified using alternative approaches. Experiments that directly compare recovery of minority variants in known strain mixtures are required. Second, we found substantially higher within-host variation in one study (Colangeli et al ) than in the other two (Guthrie et al and Walker et al ), probably reflecting large differences in study design and sample preparation. The Colangeli et al study was prospective, and included three loops of culture for DNA extractions, whereas the Guthrie et al and Walker et al studies were retrospective and re-cultured isolates after frozen storage. The observed difference in within-host variation between studies could also reflect higher population-wide MTBC diversity circulating in a higher-incidence setting (Brazil vs Canada and England). 
Future work, including a larger number of studies, is needed to identify factors associated with recovered within-host variation, including steps in MTBC sampling, sampling time, culture, laboratory preparation, or sequencing that might have influenced recovered within-host variation. Third, we considered household transmission pairs as our reference standard for transmission linkages. Although the studies we included employed additional filters to exclude household pairs unlikely to be epidemiologically linked, these are imperfect reference standards, and it is possible that these pairs are misclassified. It is also possible that transmission outside of households resulted in undocumented epidemiological linkages. However, the impact of such misclassifications would be to bias our results towards the null finding that shared minority variants are not more likely to be found in transmission pairs than unlinked pairs. Fourth, we do not have access to sequencing replicates of the same sputum culture or biological replicates of the same sputum to quantify the concordance of minority variants across sequencing or biological replicates. Fifth, we cannot differentiate between de novo variation that accumulated in culture and variation present in the sputum, although our findings of shared variation among transmission pairs suggest that some variants were present in sputum. Finally, we took a reference-based approach to identify minority variants, which might underestimate true levels of minority variation present within individual infections and shared across infections. Our findings of within-host variation present in cultured MTBC samples suggest that within-host MTBC variation could augment routine transmission inferences. More broadly, these findings suggest that assessing MTBC variation, including within-host variants, in addition to genome-wide variants and indels might improve both transmission and phylogenetic inferences.
Proteomics and metabolomics analyses of mechanism underlying bovine sperm cryoinjury
ef9bb227-3801-4cf2-9017-f557ef0d9ebb
11755957
Biochemistry[mh]
Artificial insemination (AI) technology can effectively promote bovine reproductive ability. Semen cryopreservation further ensures that AI is not limited by geography, time, and space. It can maximize the utilization efficiency and breeding rate of superior sires and therefore has enormous economic value. However, the prominent problems of low sperm motility and short lifespan caused by cryoinjury after thawing have led to around 40–70% in bovine after AI , reducing the efficiency of AI in bovine production. The freezing and thawing of semen is inevitably accompanied by cryoinjury, which is attributed to extreme osmotic changes, cold shock, intracellular ice crystal formation, excessive production of ROS, and imbalance of the antioxidant defense system . These processes ultimately disrupt sperm morphology and physiological functions. The decrease in sperm motility parameters is a significant indicator of the deterioration in the fertilization ability of sperm after cryopreservation . However, these parameters cannot explain the molecular mechanism involved in the biochemical and physiological changes of sperm during the freezing-thawing process . Mature sperm are terminally and highly differentiated cells without transcription and translation functions; this, together with their abundant, highly specialized, and compartmentalized characteristics, makes proteomics analysis a useful approach . In the past decade, sperm proteomics analysis has gradually become a new strategy for identifying frost-resistance biomarkers of semen. Some studies compared the sperm of cows, goats, sheep, and horses before and after freezing using various proteomics techniques, and multiple protein markers related to motility and frost resistance were identified . The dynamic relationship between proteins and metabolites allows the biological system to function as a cohesive unit. Metabolites are indispensable in the biochemical environment as they are the primary components of all protein biochemical structures . Metabolomics is a key scientific field in the post-genomic era, which investigates small molecules to supplement genomics, transcriptomics, and proteomics and helps to identify new disease biomarkers and treatment strategies . Metabolomics analysis has been applied to semen cryoinjury in yak, sheep, and pig to determine metabolic markers relevant to frost resistance in sperm or seminal plasma . However, most studies merely focus on valuable markers related to the viability of frozen semen, with little attention to the mechanism underlying sperm death during freezing. The main focus of this investigation is the cause of the simultaneous appearance of live and dead sperm in the same semen sample frozen under the same circumstances. Therefore, the current authors consider it essential to analyze the cryoinjury mechanism by examining the differences between live and dead sperm in frozen-thawed semen. Percoll gradient centrifugation with different concentrations can separate live and dead sperm from frozen-thawed semen, with low-motility sperm (LMS) recovered from the low-density fraction and high-motility sperm (HMS) from the high-density fraction . Reports on the mechanism of semen cryoinjury using this method are only available for yak and cow; these studies identified multiple markers related to sperm motility through proteomics analysis of HMS and LMS separated from frozen-thawed semen . However, further explorations of the biological mechanism underlying this phenomenon during semen freezing are not conclusive.
Based on the above facts, this study evaluated the damaging effect of freezing on sperm by measuring the motion parameters, intracellular ROS concentration, mitochondrial membrane potential (MMP), and ATP concentration of HMS and LMS isolated from bovine frozen-thawed sperm. The mechanism by which cryoinjury affects sperm motility was then explored using 4D label-free quantitative proteomics and untargeted metabolomics. Percoll separation of frozen-thawed sperm After thawing, the semen was layered on top of a 90 − 45% Percoll gradient and centrifuged to separate HMS and LMS. Computer-assisted sperm analysis (CASA) was used to evaluate the motion parameters of the separated sperm. Compared with the LMS collected at the 45–90% interface, all motion parameter indicators of the HMS collected at the 90% interface were superior ( P < 0.05) (Table 1). Through principal component analysis (PCA), the sperm kinematic parameters were simplified into two variables reflecting velocity and linearity, respectively (Fig. A). The principal component plot shows that the samples are well separated (Fig. B). The significant difference in HMS and LMS motion parameters provides a basis for the accuracy of the proteomics and metabolomics analyses. Data are expressed as the mean ± standard error of the mean. VCL: curvilinear velocity, VSL: straight line velocity, VAP: average path velocity, BCF: beat-cross frequency, ALH: amplitude of lateral head displacement, STR: straightness (VSL/VAP), LIN: linearity (VSL/VCL), WOB: wobble (VAP/VCL). The relationship between the motility of frozen-thawed sperm and ROS The results of flow cytometry analysis show that the ROS content in HMS is evidently lower than that in LMS (Fig. A) ( P < 0.05), and the motility of frozen sperm has a negative correlation with ROS (Table ). The effect of frozen sperm motility on mitochondrial membrane potential and ATP levels The fluorescence of MMP is shown in Fig. C; red/orange represents high potential, and green denotes low potential. According to flow cytometry analysis, the MMP in HMS is significantly higher than that in LMS ( P < 0.05), indicating a positive correlation between mitochondrial activity and sperm motility (Table ). ATP is crucial for maintaining sperm motility and movement. The results show that ATP in HMS is markedly higher than in LMS (Fig. B) ( P < 0.05), implying a positive correlation between ATP and sperm motility (Table ). Proteomics analysis Global proteomics changes in bovine sperm cryoinjury This study described how the changes in bovine sperm motility caused by deep freezing affect protein abundance. A total of 17,707 peptide segments and 2,465 proteins were identified in bovine sperm, of which 2,403 were quantified (97.4%) (Table ). Quality control of the proteome was also assessed, including peptide lengths (Figure A) and peptide distributions (Figure B). Moreover, PCA was used to investigate protein expression patterns. The results demonstrate that the HMS and LMS samples cluster well (Figure C), indicating that the motility of frozen-thawed semen has a significant impact on protein expression patterns. Next, this study combined a fold change in abundance greater than 1.5 with a P -value < 0.05 to compare the abundance of HMS and LMS proteins. In contrast to LMS, HMS had 106 proteins with higher abundance and 79 proteins with lower abundance (Fig. A, Table ). Hierarchical cluster analysis was also conducted for the differentially expressed proteins (DEPs), and the result was illustrated by a heat map (Fig. B), demonstrating the changes in the proteomes of HMS and LMS.
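The DEP screen summarised above (fold change greater than 1.5 and P < 0.05) can be expressed in a few lines of R. The sketch below uses a hypothetical quantification table and column names, so the resulting counts are only analogous to the 106 upregulated and 79 downregulated proteins reported for the real data.

```r
# Sketch of the DEP screen described above (|fold change| > 1.5 and P < 0.05).
# 'protein_quantification.csv' and its columns (protein, mean_HMS, mean_LMS, p_value)
# are hypothetical stand-ins for the 4D label-free quantification output.
prot <- read.csv("protein_quantification.csv")

prot$fold_change <- prot$mean_HMS / prot$mean_LMS
prot$log2_fc     <- log2(prot$fold_change)

up_in_HMS   <- subset(prot, fold_change > 1.5     & p_value < 0.05)
down_in_HMS <- subset(prot, fold_change < 1 / 1.5 & p_value < 0.05)

nrow(up_in_HMS)    # analogous to the 106 higher-abundance proteins reported above
nrow(down_in_HMS)  # analogous to the 79 lower-abundance proteins reported above

# The combined DEP set then feeds the heat-map clustering and functional analyses
deps <- rbind(up_in_HMS, down_in_HMS)
```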
Functional analysis of differentially expressed proteins To determine the potential biological role of DEPs in frozen-thawed bovine sperm with high and low motility, functional classification was performed on these proteins using multiple bioinformatics analysis methods. According to the three classifications of GO (biological process, cellular component, and molecular function), functional analysis was conducted on DEPs. (1) Biological process: Upregulated proteins mainly involve the metabolic process, glycolysis process, fertilization, and cell redox homeostasis. Downregulated proteins are related to protein folding and response to endoplasmic reticulum stress. (2) Cellular composition: Upregulated proteins concern cytoplasm, cytosol, and cilium. Downregulated proteins involve cytoplasm, endoplasmic reticulum, and supramolecular complex (such as endoplasmic reticulum lumen, endoplasmic reticulum subcompartment, and endoplasmic reticulum membrane), indicating a correlation between endoplasmic reticulum and sperm motility. (3) Molecular function: Upregulated proteins are mainly relevant to antioxidant activity, kinase activity, and oxidoreductase activity. Downregulated proteins primarily involve purine ribonucleoside triphosphate binding and protein folding chaperones (Fig. C, D, Table , ). Based on the KEGG database, KEGG enrichment pathway analysis was performed on the DEPs of HMS and LMS in bovine frozen-thawed sperm to obtain potential signalling pathways. In this study, upregulated proteins are significantly enriched in 31 signalling pathways, including metabolism, glycolysis/gluconeogenesis, and the PPAR signalling pathway (Fig. E, Table ). Downregulated proteins are significantly enriched in 16 signalling pathways, involving the protein processing in endoplasmic reticulum, Ras signalling pathway, and apoptosis (Fig. F, Table ). GSEA analysis Since GO and KEGG enrichment analyses of signalling pathways require screening for DEPs first, which is based on a specific degree of fold change and significance analysis, the biological results have some limitations. Therefore, the GSEA enrichment analysis was further conducted. Different from Go and KEGG analyses, enrichment analysis is performed on all expressed proteins and determines the effects of some gene sets on biological processes based on protein expression levels. Meanwhile, it can predict whether enriched signalling pathways are activated or inhibited in biological processes. Therefore, GSEA enrichment analysis was adopted in this study on all quantitative proteins to validate signalling pathways that changed in the motility-related proteome in frozen-thawed semen. The results of the GSEA analysis reveal that 12 gene sets related to signalling pathways are upregulated in HMS, including the activation of metabolic pathways, glycolysis/gluconeogenesis, and the cAMP signalling pathway gene sets. In LMS, seven gene sets related to signalling pathways are upregulated, mainly involving the activation of the gene set of apoptosis (Fig. , Table ). PPI analysis To investigate the interactions between DEPs of HMS and LMS separated from bovine frozen-thawed sperm and their involvement in the cross-linking of various biological networks, PPI analysis was performed on the DEPs using a string database. The results show that the interactions between DEPs present high complexity and are closely linked. Compared to LMS, most proteins are highly expressed in HMS. Differential proteins identified were classified based on known biological functions. 
It is found that they mainly participate in metabolic processes and proteolysis, which play a vital role in fertilization. Among them, proteins with more interaction nodes contain PARK7, PGK1 and PRDX6, representing their significant effect on regulating the motility of frozen-thawed bovine sperm (Fig. ). Western blot validation To verify the results of the above quantitative proteomics analysis, this study selected two proteins PARK7 and TPPP2, and identified the abundance of their HMS and LMS isolated from frozen-thawed sperm through the western immunoblotting method. The results disclose that PARK7 and TPPP2 have higher abundance in HMS, which agrees with the results of 4D proteomics analysis (Fig. A), implying that the proteomics data in this study are accurate and reliable. Immunofluorescence localization of PARK7 and TPPP2 The immunofluorescence method was adopted to observe the positions of PARK7 and TPPP2 proteins in HMS and LMS separated from bovine frozen-thawed sperm. PARK7 is mainly located in the posterior region of the sperm head in HMS and the acrosome and the posterior region of the sperm head in LMS, with a significantly diminished expression level. TPPP2 is located in the acrosome and flagella in HMS, while in LMS, it only exists in the acrosome, and its expression level remarkably drops (Fig. B). These results indicate that the motility of bovine frozen-thawed sperm is related to the expression level and localization of PARK7 and TPPP2. Metabolomics analysis Identification and classification of metabolites To reveal the molecular mechanism of sperm motility diminution during freezing, untargeted metabolomics was used to investigate the metabolic differences between HMS and LMS isolated from bovine frozen-thawed sperm. A total of 4,135 metabolites are identified, of which 2,484 are in the positive ion mode and 1,651 are in the negative ion mode (Table ). The OPLS-DA analysis was carried out on HMS and LMS samples to eliminate irrelevant differences and differentiate between them. As shown in the OPLS-DA scoring table (Figure A, Figure B), samples in the same group are relatively clustered, and samples from different groups are evidently dispersed, denoting good repeatability within the same group and metabolic differences between groups. To evaluate the predictability and reliability of the OPLS-DA model, seven cross-validation and 200 response ranking tests were conducted. The regression line of Q2 is always lower than that of R2, and its intercept with the y-axis is less than zero (Figure C, Figure D), indicating that the model is reliable and there is no overfitting. Therefore, the obtained VIP values can be used to screen for DEMs. This study conducted a pooled analysis of positive and negative ion patterns. Different levels of metabolites may contribute to the changes in sperm motility after freezing and thawing. A total of 329 DEMs are identified in HMS, composed of 106 upregulated and 223 downregulated (Fig. A, Table ), mainly including benzene and substituted derivatives, carboxylic acids and derivatives, fatty acyls, glycerophospholipids, organooxygen compounds, prenol lipids, and steroids and steroid derivatives. The cluster analysis was employed to explore the accumulation of DEMs in sperm. The results present that after sperm freezing, the concentration of most metabolites elevates with sperm motility diminution (Fig. B), demonstrating that sperm cryoinjury produces more metabolites. 
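DEM selection from the OPLS-DA output is typically a joint filter on VIP and significance. The R sketch below assumes an exported table of per-metabolite VIP values, P-values, and log2 fold changes, and uses the common VIP > 1 cut-off; neither the table layout nor that threshold is stated in the text above, so both are illustrative assumptions.

```r
# Sketch of DEM screening from exported OPLS-DA results.
# 'metabolite_table.csv' and its columns (metabolite, vip, p_value, log2_fc, class)
# are hypothetical; the VIP > 1 threshold is a common convention assumed here.
metab <- read.csv("metabolite_table.csv")

dems <- subset(metab, vip > 1 & p_value < 0.05)

dem_up   <- subset(dems, log2_fc > 0)   # higher in HMS under this sign convention
dem_down <- subset(dems, log2_fc < 0)   # lower in HMS

nrow(dems); nrow(dem_up); nrow(dem_down)

# Simple chemical-class summary of the selected metabolites
sort(table(dems$class), decreasing = TRUE)
```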
KEGG analysis of sperm metabolites

The metabolites differentially expressed between HMS and LMS are significantly enriched in KEGG pathways. KEGG analysis shows that 25 signalling pathways are significantly enriched in the upregulated DEMs, mainly involving the cAMP signalling pathway, the mTOR signalling pathway, and pyruvate metabolism (Fig. C, Table ). Eleven signalling pathways are significantly enriched in the downregulated metabolites, mainly related to metabolic pathways (Fig. D, Table ).

Integrated analysis of proteomics and metabolomics

The potential challenges arising during the freezing and thawing of bovine sperm were summarized by combining the proteomics and metabolomics data with the physiological indicators (Fig. ). The freezing and thawing process generates excess ROS, leading to severe oxidative stress in the sperm. Oxidative stress in turn affects mitochondrial integrity and energy production, which mainly occurs in the sperm midpiece and impairs sperm motility. Autophagy is subsequently triggered, resulting in sperm apoptosis. In addition, the inhibition of glycolysis (the ATP-producing pathway) and of cAMP is another reason for insufficient energy and reduced motility during sperm freezing (Fig. ).

Motion parameters of HMS and LMS

After thawing, the semen was layered on top of a 45–90% Percoll gradient and centrifuged to separate HMS and LMS. CASA was used to evaluate the motion parameters of the separated sperm. Compared with the LMS collected at the 45–90% interface, all motion parameters of the HMS collected from the 90% Percoll fraction were superior (P < 0.05) (Table 1). Through principal component analysis (PCA), the sperm kinematic parameters were reduced to two variables reflecting velocity and linearity, respectively (Fig. A). The principal component plot shows that the samples are well separated (Fig. B). The significant difference in motion parameters between HMS and LMS provides a basis for the accuracy of the proteomics and metabolomics analyses. Data are expressed as the mean ± standard error of the mean. VCL: curvilinear velocity, VSL: straight-line velocity, VAP: average path velocity, BCF: beat-cross frequency, ALH: amplitude of lateral head displacement, STR: straightness (VSL/VAP), LIN: linearity (VSL/VCL), WOB: wobble (VAP/VCL).

Physiological parameters of HMS and LMS

Flow cytometry analysis shows that the ROS content in HMS is significantly lower than that in LMS (Fig. A) (P < 0.05), and the motility of frozen-thawed sperm is negatively correlated with ROS (Table ). The fluorescence of MMP is shown in Fig. C; red/orange represents high potential and green denotes low potential. According to the flow cytometry analysis, the MMP in HMS is significantly higher than that in LMS (P < 0.05), indicating a positive correlation between mitochondrial activity and sperm motility (Table ). ATP is crucial for maintaining sperm motility and movement. ATP in HMS is significantly higher than in LMS (Fig. B) (P < 0.05), implying a positive correlation between ATP and sperm motility (Table ).
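The negative correlation with ROS and the positive correlations with MMP and ATP reported above are Pearson correlations (see the statistical analysis methods). A minimal sketch of that calculation, using hypothetical per-sample values rather than the study's data, might look like this:

```python
from scipy.stats import pearsonr

# Hypothetical per-sample values for illustration only (not the study's data):
# total motility (%) and the matching ROS, MMP and ATP readouts per sample.
motility = [82.1, 78.4, 80.3, 35.6, 31.2, 38.9]
ros      = [110.0, 125.0, 118.0, 260.0, 285.0, 240.0]  # arbitrary fluorescence units
mmp      = [0.86, 0.82, 0.84, 0.41, 0.38, 0.45]         # JC-1 red/green ratio
atp      = [5.1, 4.8, 5.0, 2.1, 1.8, 2.3]               # arbitrary concentration units

for name, values in [("ROS", ros), ("MMP", mmp), ("ATP", atp)]:
    r, p = pearsonr(motility, values)
    print(f"motility vs {name}: r = {r:+.2f}, p = {p:.3f}")
```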
Global proteomics changes in bovine sperm cryoinjury

To describe how the changes in bovine sperm motility caused by cryopreservation are reflected in protein abundance, a total of 17,707 peptides and 2,465 proteins were identified in bovine sperm, of which 2,403 (97.4%) were quantified (Table ). Quality control of the proteome was also assessed, including peptide lengths (Figure A) and peptide distributions (Figure B). Moreover, PCA was used to investigate the protein expression patterns. The HMS and LMS samples cluster well (Figure C), indicating that the motility of frozen-thawed semen has a significant impact on protein expression patterns. Next, a fold change in abundance greater than 1.5 combined with a P-value < 0.05 was used to compare the abundance of HMS and LMS proteins. Compared with LMS, HMS has 106 proteins with higher abundance and 79 proteins with lower abundance (Fig. A, Table ). Hierarchical cluster analysis was also conducted for the DEPs and illustrated with a heat map (Fig. B), which shows the proteome differences between HMS and LMS.
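As an illustration of the DEP screening and clustering steps described above (fold change > 1.5, P < 0.05, followed by a hierarchical-clustering heat map), the sketch below uses a hypothetical protein intensity table with placeholder column names; it is not the study's pipeline, only one way such a filter and plot could be implemented:

```python
import numpy as np
import pandas as pd
from scipy import stats
import seaborn as sns

# Hypothetical protein intensity table: rows = proteins, columns = replicate
# intensities for the two groups (file and column names are placeholders).
intensities = pd.read_csv("protein_intensities.csv", index_col="protein")
hms_cols = ["HMS_1", "HMS_2", "HMS_3"]
lms_cols = ["LMS_1", "LMS_2", "LMS_3"]

fold_change = intensities[hms_cols].mean(axis=1) / intensities[lms_cols].mean(axis=1)
p_values = stats.ttest_ind(intensities[hms_cols], intensities[lms_cols], axis=1).pvalue

# Call a protein differentially expressed if |fold change| exceeds 1.5 in either
# direction and the two-sample t-test p-value is below 0.05.
is_dep = (p_values < 0.05) & ((fold_change > 1.5) | (fold_change < 1 / 1.5))
deps = intensities.loc[is_dep]
print(f"{int(is_dep.sum())} differentially expressed proteins")

# Heat map of DEPs: log-transform and z-score each protein across samples,
# then cluster rows and columns hierarchically.
z_scored = np.log2(deps).apply(lambda row: (row - row.mean()) / row.std(), axis=1)
sns.clustermap(z_scored, cmap="vlag", figsize=(6, 8)).savefig("dep_heatmap.png")
```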
Discussion

Semen cryopreservation can cause increased fluidity and permeability of the sperm plasma membrane, reduced acrosome integrity, abnormal flagella, mitochondrial damage, and oxidative stress induced by elevated reactive oxygen species. These changes alter the structure of lipids and proteins, lower sperm motility, and exacerbate sperm DNA fragmentation, resulting in reduced quality and fertilization rates of frozen-thawed sperm.
Current studies on the mechanism of semen cryoinjury mostly focus on the biological changes between fresh and frozen-thawed sperm and on improving the quality of frozen-thawed sperm by adding antioxidant substances. However, during freezing and thawing the surviving spermatozoa also undergo prominent modifications in their structure and physiological status. We therefore believe that comparing surviving and dead sperm after the freeze-thaw process is valuable for improving sperm quality, because both are subjected to the same freezing procedure. By analyzing the motion parameters of HMS and LMS isolated from bovine frozen-thawed sperm on a 45–90% Percoll gradient, this study found that all indicators, such as total motility, progressive motility, VCL, VSL, and VAP, were significantly higher in HMS than in LMS. This confirms that Percoll separation can distinguish surviving from dead sperm after freezing and thawing, providing reliable support for probing the biological mechanism of bovine semen cryoinjury. Since fertility is a multi-parameter process, a single sperm quality parameter is insufficient to evaluate the overall fertility potential of a semen sample. Potential protein and metabolite biomarkers of the motility decrease in cryopreserved sperm have been identified in several mammalian species, such as ram and boar. Therefore, proteomics and metabolomics are promising strategies for identifying potential biomarkers of the cryoinjury-induced decrease in bovine sperm motility and for understanding the underlying biological functions.

Identification and analysis of proteins and metabolites in bovine sperm cryoinjury

Applying 4D label-free technology, this study identified and quantified 2,403 proteins in bovine sperm, of which 106 were upregulated and 79 were downregulated in HMS. These differences may be one factor underlying the decrease in sperm motility. The results show that some proteins with antioxidant capacity, such as PARK7 and PRDX6, are highly expressed in HMS; they can preserve sperm vitality by clearing the ROS produced during sperm freezing. Moreover, pathways related to energy metabolism, especially proteins involved in glycolysis, are highly expressed in HMS. These proteins can generate ATP through metabolism without producing ROS, meeting the energy needs of sperm while preventing oxidative stress. We therefore speculate that the loss of the proteins that maintain these functions during the freezing of bovine sperm may cause the decrease in sperm motility. It is worth noting that signalling pathways related to spermatogenesis and structural reorganization, such as those involving chaperone complexes and protein folding, have been specifically identified in bovine sperm and are negatively correlated with fertilization rate. The endoplasmic reticulum plays a crucial role in the folding and assembly of newly synthesized proteins in mammalian cells, but it is eliminated during spermatogenesis, and a highly active phase of protein synthesis and folding occurs before spermatogenesis is completed. We therefore believe that the activation of protein folding and endoplasmic reticulum pathways in LMS may reflect abnormal protein folding during sperm freezing, ultimately contributing to the low viability of frozen-thawed bovine sperm. Through untargeted metabolomics, this study identified 4,135 metabolites in bovine sperm, of which 106 were upregulated and 223 were downregulated in HMS.
This is, to our knowledge, the largest number of metabolites identified in bovine sperm to date, laying a foundation for future research on the metabolic functions of bovine sperm. DEM analysis reveals that metabolites related to the cAMP signalling pathway and pyruvate metabolism are highly abundant in HMS, indicating that metabolites and proteins in sperm act together through metabolic pathways to generate ATP and sustain the sperm energy supply. Fan et al. discovered that adding galactose to the semen freezing extender could increase ATP levels by augmenting AKR1B1 protein expression, thereby enhancing frozen-thawed sperm motility. This indicates that combining proteomics with metabolomics to investigate the disequilibrium of metabolic pathways in bovine sperm cryoinjury is essential for adjusting the sperm ATP synthesis pathway.

ROS generated during the freeze-thaw process induces oxidative stress in sperm

The imbalance between the antioxidant defence system and ROS production in sperm cells during cryopreservation leads to oxidative stress. We speculated that oxidative stress during the freeze-thaw process of bovine sperm might be the main cause of the decrease in sperm motility. To verify this assumption, the ROS content in HMS and LMS isolated from bovine frozen-thawed sperm was measured. The results show that the high ROS content in LMS greatly impairs sperm motility. Meanwhile, proteomics analysis of HMS and LMS shows that proteins related to oxidative stress (PRDX6 and PARK7) are significantly downregulated in LMS. The reduced expression of these proteins indicates that the antioxidant system is damaged in LMS, allowing more ROS to accumulate. Shi et al. reported similar findings when simulating oxidative stress by supplementing sperm with exogenous H2O2 and freezing semen. PARK7 mainly serves as a redox-sensitive partner and oxidative stress sensor that responds to cellular damage caused by oxidative stress. The level of PARK7 in human sperm is positively correlated with the integrity and vitality of the sperm plasma membrane and with SOD activity. PARK7 can protect cells from oxidative stress by clearing ROS through autoxidation. Although it has been reported that PARK7 can affect the viability of yak sperm after freezing and thawing, the potential changes in the localization of PARK7 during mammalian sperm freezing are discussed here for the first time. The immunofluorescence analysis shows that the localization of PARK7 in HMS differs from that in LMS after sperm freezing: in HMS it is mainly located in the posterior half of the head, whereas in LMS it is located in the acrosome and the posterior half of the head. This suggests that the migration of proteins during sperm freezing may affect motility. Additionally, PARK7 has been localized on both human and pig flagella; however, in this study it was not detected on bovine flagella, indicating species specificity in the localization of PARK7 in sperm. The sensitivity of the sperm plasma membrane to oxidative stress is attributed to its high content of unsaturated fatty acids. In this study, metabolites linked to oxidative stress, such as L-homocitrulline, acetylcarnitine, and isobutyryl-L-carnitine, are significantly downregulated in LMS. It has been reported that adding citrulline and carnitine to the semen freezing extender can enhance the antioxidant capacity and mitochondrial membrane potential of bovine and sheep sperm, improving semen quality.
Therefore, the results of this study suggest that the decrease in the levels of key antioxidant proteins and metabolites leads to oxidative stress, which affects motility after sperm cryopreservation.

Cryopreservation can stimulate autophagy of sperm mitochondria

Cellular autophagy is a conserved self-degradation process that occurs in various stress responses, such as oxidative stress and heat stress. Uribe et al. found that exposure to oxidative stress could activate autophagy in human sperm, thereby preventing impaired sperm motility and cell death. However, recent studies have proposed that mitochondrial autophagy is significantly activated in frozen-thawed sperm. Although autophagy can help sperm cope with mild oxidative stress caused by ROS, when sperm freezing causes severe ROS-induced damage, autophagy leads to programmed cell death. Given the high degree of cytoplasmic loss in sperm, autophagy mainly occurs in the mitochondria. In this study, proteins related to mitochondrial autophagy (NGF and CLU) are significantly upregulated in LMS, and LMS shows severe oxidative stress and MMP damage. Furthermore, according to a recent study, both adding H2O2 to human sperm to simulate oxidative stress and the oxidative stress that occurs during freezing and thawing cause sperm autophagy and apoptosis, resulting in decreased sperm motility. We therefore hypothesize that oxidative stress-induced mitochondrial autophagy during the freezing of bovine sperm plays a crucial role in the decrease in viability. Autophagy can facilitate, conflict, or cooperate with other cell death processes, including apoptosis and necrosis, serving either a pro-survival or a pro-death function. KEGG analysis shows that the apoptotic signalling pathway is significantly enriched in LMS, and GSEA of all expressed proteins in HMS and LMS reveals that the apoptotic signalling pathway is activated in LMS. We therefore suspect that oxidative stress during sperm freezing stimulates substantial autophagy in sperm, leading to programmed cell death through the apoptotic signalling pathway, which agrees with previous studies on autophagy-induced apoptosis. Induced autophagy seems to help sperm cope with mild oxidative stress caused by ROS; however, in the case of severe ROS-induced damage during sperm freezing, autophagy may lead to programmed sperm death.

Cryopreservation can change ATP production in sperm

Sperm movement is generated by flagellar beating, which consumes large amounts of energy released by ATP hydrolysis. This study found that HMS isolated from frozen-thawed sperm had higher mitochondrial membrane potential and ATP, indicating that reduced ATP production caused by the impaired mitochondrial function of LMS is a primary factor in the decrease in sperm motility. Thoroughly analyzing this mechanism with molecular biology methods is particularly important for mitigating semen cryoinjury. The ATP required for bovine sperm to maintain vitality is produced jointly by glycolysis and oxidative phosphorylation, with the former being dominant. The proteomics analysis in this study reveals that the glycolysis signalling pathway is significantly enriched only in HMS. The significant downregulation of the glycolytic enzymes GPI, ENO3, PGAM2, FBP1, GALM, and PGK1 in LMS may be the major reason for the inhibition of ATP production through glycolysis, which corresponds to the low ATP level in LMS.
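For reference, the payoff-phase glycolytic steps catalysed by several of the enzymes listed above can be written out explicitly; these are textbook reactions, shown here only to make clear where ATP is generated, not results from this study:

```latex
% Glycolytic steps catalysed by enzymes discussed in the text (textbook reactions):
\begin{align*}
\text{PGK1:}  \quad & \text{1,3-bisphosphoglycerate} + \text{ADP} \rightleftharpoons \text{3-phosphoglycerate} + \text{ATP} \\
\text{PGAM2:} \quad & \text{3-phosphoglycerate} \rightleftharpoons \text{2-phosphoglycerate} \\
\text{ENO3:}  \quad & \text{2-phosphoglycerate} \rightleftharpoons \text{phosphoenolpyruvate} + \mathrm{H_2O}
\end{align*}
```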
In addition, GSEA of all expressed proteins in HMS and LMS shows that the glycolysis signalling pathway is activated in HMS, demonstrating that the bioinformatics results of this study are reliable and can provide solid support for subsequent research on the biological mechanism. PGK is the main enzyme involved in ATP production in the glycolysis pathway; it converts 1,3-bisphosphoglycerate and ADP into 3-phosphoglycerate and ATP, and PGK2 has been verified as a key protein affecting sperm motility and fertility. In this study, the expression level of PGK1 in HMS is markedly higher than that in LMS. It is therefore reasonable to assume that the high level of PGK1 after sperm freezing affects sperm motility by regulating ATP production through the glycolysis pathway. PGAM2 is a catalytic enzyme that promotes the conversion of 3-phosphoglycerate to 2-phosphoglycerate in the glycolysis pathway and participates in other metabolic processes by balancing the interconversion of 3-phosphoglycerate and 2-phosphoglycerate. The prominent drop of PGAM2 in LMS can be regarded as an inhibition of ATP production by the glycolysis pathway that in turn influences sperm motility. It has been reported that adding cholesterol-loaded cyclodextrin to the sperm freezing extender can reduce the degradation of PGAM2 in frozen-thawed sperm, minimize the impact of cryopreservation on glycolysis, and enhance sperm motility. We therefore speculate that the downregulation of these glycolytic enzymes inhibits ATP production and thereby affects sperm motility during cryopreservation, although the specific mechanisms need further research. ATP and adenosine are two purine derivatives that play crucial roles in maintaining cellular energy balance and nucleotide synthesis, respectively. Purine and adenosine receptors can intervene in the regulation of cAMP levels to enhance sperm function and vitality. The metabolomics results of this study demonstrate that purine metabolism and the cAMP signalling pathway are significantly enriched in HMS. Metabolites associated with them, including acetylcholine, adenosine 5'-monophosphate, adenosine monophosphate, and D-myo-inositol-1,4,5-trisphosphate, are more abundant in HMS, which helps cAMP remain at a high level and thereby ensures high sperm motility after freezing and thawing. A previous study found that changes in cAMP within sperm cells are consistent with ATP production, and adding exogenous cAMP to the semen freezing extender can promote sperm motility after freezing and thawing. This indicates that the cAMP signalling pathway can prevent the decrease in sperm motility during freezing and thawing by maintaining the ATP concentration, which is consistent with the higher ATP levels in HMS sperm. In addition, the GSEA of the sperm proteome in this study shows that cAMP signalling is activated in HMS; cAMP can activate PKA and increase tyrosine phosphorylation to maintain sperm mitochondrial function and ATP production. Phosphorylation of flagellar proteins through the cAMP/PKA signalling pathway can boost sperm motility. TPPP2 is a mitochondrial function-related protein specifically expressed in the reproductive organs of male animals. When TPPP2 is inhibited in human and mouse sperm, vitality and ATP content decrease significantly. Furthermore, after knockout of TPPP2 in mice, sperm motility and ATP content decreased significantly, accompanied by a marked drop in sperm count and damage to mitochondrial structure.
In this study, TPPP2 is significantly downregulated in the LMS isolated from frozen-thawed sperm. To examine the role of TPPP2 in frozen-thawed bovine sperm, an immunofluorescence experiment was conducted. The results show that TPPP2 is located in the flagellum in HMS but only in the acrosome in LMS, implying that changes in the localization of TPPP2 during semen freezing may be associated with impaired mitochondrial function and, consequently, decreased ATP synthesis. The expression level of TPPP2 may thus be a potential biomarker of sperm motility. This is the first time that TPPP2 has been localized in mature mammalian sperm, laying a valuable foundation for studying its function in sperm.
Conclusions

In summary, cryopreservation divides bovine sperm into HMS and LMS populations. On the whole, the highly expressed antioxidant enzymes in HMS can sustain sperm motility by controlling the ROS produced during freezing, thereby avoiding oxidative stress and apoptosis, while the glycolysis pathway in HMS ensures the ATP production needed to maintain sperm motility. The key proteins, metabolites and pathways identified in this research provide new insights into the molecular regulatory mechanism of sperm cryoinjury during cryopreservation and into the improvement of frozen semen motility.
Materials and methods

Reagents

All chemicals not otherwise specified were purchased from Sigma-Aldrich (MO, USA).

Semen cryopreservation

Fresh semen was collected with an artificial vagina from four Gaoqing bulls (4–6 years old) at the Aohang farm in Shandong Province, China. To evaluate progressive motility, 3 μL of fresh semen was immediately examined with a computer-assisted sperm analysis system (CASA; Nikon Eclipse E200 microscope, Basler acA780-75gc camera, SCA sperm class analyzer). Only semen samples with a volume ≥ 2.0 mL and ≥ 70% motility were considered for analysis. A Biladyl® extender (Minitube, Germany) was used to dilute the samples in two steps. In the first step, the glycerol-free solution was mixed with the samples, yielding 50% of the total volume, and the samples were stored for 2 h at 5 °C. In the second step, the samples were mixed with a chilled (5 °C) solution containing glycerol and stored under the same conditions. The sperm were then cryopreserved at a final cell density of 140 to 200 × 10^6/mL. The cooling curve was implemented as follows: -5 °C/min from 5 to 4 °C, -3 °C/min from 4 to -10 °C, -40 °C/min from -10 to -100 °C, and -20 °C/min from -100 to -140 °C. All samples were kept in liquid nitrogen for long-term storage.

Sperm sample preparation

After thawing at 37 °C for 30 s, sperm samples were layered onto 1.5 mL of 45% Percoll over 1.5 mL of 90% Percoll in a 15 mL conical plastic tube and centrifuged for 10 min at 700 × g to separate HMS and LMS. Morphologically abnormal spermatozoa and seminal extender were recovered from the top of the 45% Percoll layer, LMS was collected from the 45–90% Percoll interface, and HMS was recovered from the bottom of the 90% Percoll layer. Sperm quality parameters were evaluated using CASA according to World Health Organization standards.

Spermatozoon ROS detection

ROS production in spermatozoa was measured with a ROS kit and DCFH-DA probes following the manufacturer's recommendations. Sperm (10 × 10^6/mL) and DCFH-DA (10 μM) were incubated in the dark for 30 min at 37 °C. In the presence of intracellular ROS, DCFH-DA is converted into fluorescent 2,7-dichlorofluorescein within the cell. The fluorescence intensity was quantified by flow cytometry.

Spermatozoon MMP detection

Sperm mitochondrial activity was detected with the JC-1 dye according to the supplied protocol. Approximately 100 μL of sperm (10 × 10^6/mL) was mixed with 10 μL of JC-1 staining solution and kept in the dark at 37 °C for 30 min. The sample was centrifuged at 600 × g and 4 °C for 5 min, 1 mL of JC-1 staining buffer was added, and the sample was centrifuged again under the same conditions. Afterwards, the sample was resuspended in 200 μL of JC-1 staining buffer, and the fluorescence intensity of JC-1 was determined by flow cytometry.
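As a quick consistency check of the stepwise freezing program described in the cryopreservation subsection above, the duration of each cooling ramp follows directly from the temperature span divided by the rate; the short sketch below works this out. The rates and endpoints are taken from the protocol above, while the calculation itself is only illustrative and not part of the study's procedure.

```python
# Cooling ramps from the freezing program above: (start °C, end °C, rate °C/min).
ramps = [
    (5, 4, 5),
    (4, -10, 3),
    (-10, -100, 40),
    (-100, -140, 20),
]

total = 0.0
for start, end, rate in ramps:
    minutes = (start - end) / rate  # time to traverse the span at the given rate
    total += minutes
    print(f"{start:>5} °C -> {end:>5} °C at {rate} °C/min: {minutes:.2f} min")

print(f"total programmed cooling time: {total:.2f} min")
```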
Spermatozoon ATP measurement

The phosphomolybdic acid colorimetric method was used to measure the ATP concentration in sperm with an ATP detection kit according to the supplied instructions. Lysis buffer (200 μL) was mixed with the sperm sample (10 × 10^6/mL) and kept on ice for 30 min. The supernatant was collected after centrifugation for 10 min at 12,000 × g and 4 °C and used to estimate the ATP content. The sample was boiled in a water bath for 10 min, and the ATP stock solution supplied with the kit was mixed thoroughly and used as the reference solution. After 5 min of incubation at ambient temperature (20–25 °C), the absorbance was measured in a 96-well plate using a multifunctional microplate reader. ATP levels in the sperm were quantified on the basis of the linear relationship between absorbance and ATP concentration.

Quantitative proteomics analysis

Protein extraction and trypsin treatment

Each sample contained 100 million sperm. HMS and LMS samples were washed twice with PBS (10 min at 2,000 × g) and mixed with a lysis buffer containing SDS (1%), a protease inhibitor cocktail (1%), TSA (3 μM), and NAM (50 mM). The homogenates were sonicated three times with a high-intensity ultrasonic processor (Scientz, Ningbo, China). The samples were centrifuged for 10 min at 12,000 × g and 4 °C to obtain the supernatant, and the protein in the supernatant was quantified with a BCA reagent (Beyotime Biotechnology, Wuhan, China) following the manufacturer's recommendations. For enzymolysis, each protein sample (100 μg) was mixed with an equal volume of lysis solution, then with one volume of chilled acetone, mixed vigorously, and diluted with four volumes of chilled acetone. The protein was precipitated for 2 h at -20 °C, redissolved in TEAB (200 mM), and dispersed by ultrasonication. The protein samples were digested by incubation with trypsin at a protein-to-trypsin ratio of 50:1 for 24 h. The samples were treated with DTT at a final concentration of 5 mM and reduced at 56 °C for 30 min, then adjusted to 11 mM IAA and kept in the dark at 20–25 °C for 15 min.

LC-MS/MS analyses

The peptides were dissolved in a solution containing formic acid (0.1%) and acetonitrile (2%) and promptly loaded onto a reverse-phase analytical column. For gradient elution, the concentration of solvent B (0.1% formic acid in acetonitrile) was increased progressively from 9 to 24% over 5 min, from 24 to 35% over 3 min, and from 35 to 80% over 4 min, and the column was then held at 80% solvent B for 4 min at a flow rate of 450 nL/min on an EASY-nLC 1200 UPLC system (Thermo Scientific, CA, USA). The peptides were then analysed with a capillary ion source on a timsTOF Pro mass spectrometer (Bruker Daltonics) coupled online to the UPLC system, followed by MS/MS. The ion source voltage was set to 1.75 kV, and peptide precursor ions and their secondary fragments were detected and analysed by TOF. Data were collected in data-independent parallel accumulation-serial fragmentation (dia-PASEF) mode. The primary MS scanning range was set to 100–1700 m/z, and ten PASEF acquisitions were performed after each primary MS scan. In the secondary MS scans, the isolation window was 25 m/z across a range of 400–1200 m/z.

Database searches

The acquired MS/MS spectra were searched against the UniProt Bos taurus database (37,508 sequences) with the MaxQuant search engine (v.1.8) using a concatenated reverse-decoy database. The cleavage enzyme was set to trypsin/P, and a maximum of one missed cleavage was allowed. Carbamidomethylation of Cys was set as a fixed modification, and Met oxidation, deamidation (NQ), N-terminal acetylation, and phosphorylation of Thr, Ser, and Tyr were set as variable modifications. The FDR was set to ≤ 1%.
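Downstream of a MaxQuant search, the protein-level output is typically post-filtered before quantitative comparison. The sketch below shows one common way to drop decoy and contaminant entries from a proteinGroups.txt table; the column names follow standard MaxQuant output as we understand it, the file name is hypothetical, and this is not a description of the exact pipeline used in this study.

```python
import pandas as pd

# Load the MaxQuant protein-level output (tab-separated). The columns
# "Reverse", "Potential contaminant" and "Only identified by site" are
# standard MaxQuant flags marked with "+"; treating them this way is an
# assumption about the search output, not this study's exact workflow.
proteins = pd.read_csv("proteinGroups.txt", sep="\t", low_memory=False)

flag_columns = ["Reverse", "Potential contaminant", "Only identified by site"]
keep = pd.Series(True, index=proteins.index)
for column in flag_columns:
    if column in proteins.columns:
        keep &= proteins[column] != "+"

filtered = proteins[keep]
print(f"{len(proteins)} protein groups before filtering, {len(filtered)} after")
```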
Statistical analysis

Data were statistically evaluated with SPSS Statistics software (IBM, NY, USA, version 28.0). The correlations between motility parameters and ROS, MMP, and ATP were assessed by Pearson correlation (the data were normally distributed). A fold-change cut-off of 1.5 and p ≤ 0.05 were used to identify differentially expressed proteins (DEPs). Sperm treatments were compared using a t-test. Data are presented as the mean ± SEM. Differences were considered significant and highly significant at p ≤ 0.05 and p ≤ 0.01, respectively.

Bioinformatics analysis

Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) functional enrichment analyses of the DEPs identified between HMS and LMS spermatozoa were conducted with the KOBAS (http://kobas.cbi.pku.edu.cn/annotate/) and g:Profiler (https://biit.cs.ut.ee/gprofiler/gost) tools. Pathways with a corrected p ≤ 0.05 were considered significantly enriched. Gene set enrichment analysis (GSEA v4.1.0) was used to evaluate protein enrichment within previously defined pathways; pathways with an FDR-adjusted q ≤ 0.25 and NOM p ≤ 0.05 were considered significantly enriched. The STRING database (v11.5, https://string-db.org/) was used to screen established protein-protein interactions (PPIs), and the network was built with Cytoscape software (v3.9.1), excluding networks with fewer than three nodes.
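As an illustration of the g:Profiler step named above, the following sketch submits a short list of DEP gene symbols through the gprofiler-official Python client. The package name, the GProfiler.profile call, and the 'btaurus' organism code reflect that client's public interface as we understand it; the gene list is just a few of the DEPs mentioned in the results and does not reproduce the study's full query.

```python
from gprofiler import GProfiler  # pip install gprofiler-official

# A few DEP gene symbols from the results section, used purely as an example query.
dep_genes = ["PARK7", "PRDX6", "PGK1", "PGAM2", "ENO3", "TPPP2"]

gp = GProfiler(return_dataframe=True)
enrichment = gp.profile(
    organism="btaurus",                            # Bos taurus organism code in g:Profiler
    query=dep_genes,
    sources=["GO:BP", "GO:CC", "GO:MF", "KEGG"],   # restrict results to GO and KEGG terms
)

# Keep the columns most relevant for reporting enriched terms.
print(enrichment[["source", "name", "p_value", "intersection_size"]].head(10))
```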
Western blotting analysis
After denaturation in 5X SDS loading buffer, bovine sperm proteins were separated by electrophoresis on 12% SDS-PAGE gels. The proteins were transferred to a PVDF membrane, which was blocked with 5% skim milk at 22–25 ºC for 2.5 h. The membrane was then incubated overnight at 4 ºC with rabbit polyclonal anti-α-tubulin (1:2000; Proteintech 11224-1-AP, China), rabbit polyclonal anti-PARK7 (1:1000; Abcam ab18257, UK), and rabbit polyclonal anti-TPPP2 (1:1000; Abcam ab236887, UK). After washing with TBST, a goat polyclonal anti-rabbit IgG secondary antibody (1:2000; Bioss-2405R, China) was added. Protein bands were visualized using an ECL chemiluminescence reagent and captured with a CCD camera system (Tanon, Shanghai, China). ImageJ (National Institutes of Health) was used to analyze all images, and the results for the two target proteins were normalized to β-actin as the internal reference.
Immunofluorescence
Sperm samples were fixed in 4% paraformaldehyde (PFA) for 15 min, and approximately 10 μL of sample was dropped onto a coated glass slide. The slide was then kept in an oven at 37 ºC for 30 min. The sample was permeabilized with 0.5% Triton X-100 for 10 min and blocked with 10% horse serum for 1 h. Primary antibodies against PARK7 and TPPP2 (1:400) were then added, and the slide was kept overnight at 4 ºC in a moist slide box. Next, a goat anti-rabbit Alexa Fluor® 488 IgG H&L secondary antibody (1:500; Abcam ab150077, UK) was added, and the slide was incubated for 1 h at 20–25 ºC in the dark. Sperm nuclei were stained with DAPI (10 μg/mL; Solarbio C0065). After adding an anti-fluorescence quenching reagent and mounting the slide, the smear was examined with an ultra-high-resolution fluorescence microscope (ZEISS Scope.A1).
ENO2-Regulated Glycolysis in Endothelial Cells Contributes to FGF2-Induced Retinal Neovascularization
Animal Models and Treatment
Wild-type C57BL/6J mice were obtained from, and housed in, the animal center of North Sichuan Medical College. All animal procedures adhered to the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research and received approval from the ethics committee of North Sichuan Medical College. The oxygen-induced retinopathy (OIR) model was established as previously described. In summary, neonatal mice were exposed to hyperoxia (75% ± 2% oxygen) from postnatal day 7 (P7) to P12, after which they were returned to normal room air (21% oxygen) until P17. Control mice were maintained in room air. Both male and female pups were included in the study. AP-III-a4 (MedChemExpress, Monmouth Junction, NJ, USA; HY-15858) was freshly prepared in a solution of 0.1% DMSO (Solarbio, Beijing, China; D8371, diluted with saline). For AP-III-a4 treatment, mouse pups were randomly divided into two groups at P12. One group was treated with AP-III-a4 (intraperitoneal injection, 5 mg/kg), whereas the other group was intraperitoneally injected with the same volume of saline (with 0.1% DMSO), once daily from P12 to P16, with sample collection at P17.
Cell Lines and Cultures
Human retinal microvascular endothelial cells (HRMECs) were purchased from Cell Systems (ACBRI 181, Seattle, WA, USA) and grown with the EGM-2 Endothelial Cell Growth Medium-2 BulletKit (C3162, Lonza, Basel, Switzerland). Recombinant human FGF2 was purchased from R&D Systems (Minneapolis, MN, USA; 233-FB). AP-III-a4 was dissolved in 0.1% DMSO and added at a concentration of 10 µM; the corresponding vehicle medium (with 0.1% DMSO) was used as the control. Cells were maintained under standard culture conditions to ensure their growth and functionality.
ATP Assay
ATP levels were measured using an ATP assay kit (Abcam, ab83355) following the manufacturer's guidelines. Briefly, 1 × 10⁶ cells per sample were harvested in ATP assay buffer. After pipetting up and down, the supernatant was collected after centrifugation at 13,000× g for 5 minutes at 4°C. We added 50 µL of Reaction Mix to each standard and sample well and 50 µL of Background Reaction Mix to the background control wells. After incubating at room temperature for 30 minutes, we measured the output at 570 nm using a Varioskan LUX microplate reader (Thermo Fisher Scientific, Waltham, MA, USA).
Glycolysis Assay
Glycolytic activity was assessed via the extracellular acidification rate using a glycolysis assay kit (Abcam, Cambridge, UK; ab197244) in accordance with the manufacturer's instructions. In brief, HRMECs were plated in 96-well plates at a density of 4 × 10⁴ cells per well. After a 24-hour treatment under various conditions, the cells were washed twice with respiration buffer. Then, 150 µL of respiration buffer was added to all wells containing cells, as well as to blank control wells. Each sample well received 10 µL of reconstituted Glycolysis Assay Reagent, and 10 µL of respiration buffer was added to the blank control wells. Then, 2 µL of test compound was added to the wells. Signals were detected at 380 nm excitation and 615 nm emission using a Varioskan LUX microplate reader at 10-minute intervals for 120 minutes.
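The glycolysis assay above yields one fluorescence reading (380 nm excitation / 615 nm emission) every 10 minutes for 120 minutes per well. One common way to summarize such a kinetic trace is the slope of its linear portion; the sketch below illustrates this on synthetic readings and is only an assumption about the readout — the kit's own analysis template may compute the acidification rate differently.

```python
import numpy as np
from scipy import stats

# Hypothetical kinetic readout for a single well: one fluorescence value
# (ex 380 nm / em 615 nm) every 10 minutes for 120 minutes. Synthetic numbers
# are used purely to make the sketch runnable.
rng = np.random.default_rng(2)
time_min = np.arange(0, 130, 10)                              # 0, 10, ..., 120 min (13 reads)
signal = 1000 + 8.5 * time_min + rng.normal(0, 20, time_min.size)

# Summarize extracellular acidification as the slope of signal vs. time
# (an assumed readout; the kit's analysis template may differ).
fit = stats.linregress(time_min, signal)
print(f"acidification rate ≈ {fit.slope:.2f} RFU/min, R² = {fit.rvalue ** 2:.3f}")
```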
Glycolysis/OXPHOS Assay
The glycolysis/OXPHOS rate in endothelial cells was measured using the Glycolysis/OXPHOS Assay Kit (Dojindo, Tabaru, Japan; G270) according to the manufacturer's instructions. We plated 4 × 10³ HRMECs per well in 96-well plates and stimulated them with 50 ng/mL FGF2 for 24 hours. We added 2-DG and oligomycin (included in the kit) at concentrations of 25 mmol/L and 1.25 µmol/L, respectively. For lactate assays, 20 µL of culture medium was removed and incubated with 80 µL of lactate working solution. The remaining cells and medium were incubated with 100 µL of ATP working solution at 25°C for 10 minutes, and the relative light units were then detected using the Varioskan LUX microplate reader (Thermo Fisher Scientific).
Immunofluorescence
Eyeballs or cells were fixed in 4% paraformaldehyde at room temperature for 2 hours or 20 minutes, respectively. After fixation, the samples were carefully dissected and permeabilized with 0.5% Triton X-100, blocked with 3% BSA at room temperature for 1 hour, and then incubated with primary antibodies overnight at 4°C. Afterward, the samples were incubated with secondary antibodies at 37°C for 1 hour. The following primary antibodies were used: anti-CD31 (Abcam, ab9498, diluted 1:1000); anti-VWF (HUABIO, Woburn, MA, USA; HA722833); anti-GFAP (HUABIO, ET1601-23, 1:200); anti-NG2 (Abcam, ab279348, 1:100); anti-NeuN (Abcam, ab104224, 1:200); and anti-PDGFR-beta (HUABIO, ET1611-29, 1:100). The following secondary antibodies were used: Cy3-labelled goat anti-mouse IgG (H+L) (Beyotime, Jiangsu, China; A0521) and Alexa Fluor 488-labelled goat anti-rabbit IgG (H+L) (Beyotime, A0423). Images were acquired using a confocal microscope (Leica, Wetzlar, Germany) and analyzed using ImageJ software.
Quantitative RT-PCR
Total RNA was extracted from retinal tissues and HRMECs using TRIzol reagent (Invitrogen, San Diego, CA, USA). The qRT-PCR assays were conducted using RT Master Mix for qPCR (MedChemExpress) and SYBR Green qPCR Master Mix (MedChemExpress) on an ABI 7500 Prism system (Applied Biosystems) following the manufacturer's instructions. Results were normalized to the expression levels of β-actin. The primers for the genes analyzed are listed in .
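The methods state only that qRT-PCR results were normalized to β-actin. Assuming the conventional 2^-ΔΔCt approach (an assumption; the method is not named in the text), the normalization for a target such as ENO2 would look like the sketch below, with hypothetical Ct values.

```python
import numpy as np

# Hypothetical Ct values for a target gene (e.g. ENO2) and the β-actin reference
# in control vs. FGF2-treated HRMECs. The 2^-ΔΔCt calculation itself is an
# assumption; the methods only state normalization to β-actin.
ct_target_ctrl = np.array([24.1, 24.3, 24.0])
ct_actin_ctrl  = np.array([16.2, 16.1, 16.3])
ct_target_fgf2 = np.array([22.6, 22.8, 22.5])
ct_actin_fgf2  = np.array([16.2, 16.0, 16.1])

delta_ctrl = ct_target_ctrl - ct_actin_ctrl        # ΔCt per control replicate
delta_fgf2 = ct_target_fgf2 - ct_actin_fgf2        # ΔCt per treated replicate
ddct = delta_fgf2 - delta_ctrl.mean()              # ΔΔCt relative to mean control
fold_change = 2.0 ** (-ddct)                       # relative expression per replicate

print(f"ENO2 fold change (FGF2 vs control): "
      f"{fold_change.mean():.2f} ± {fold_change.std(ddof=1):.2f}")
```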
Western Blotting
Cell and retina lysates were prepared using RIPA buffer containing 1% PMSF. Protein concentration was determined using a BCA assay kit (Beyotime, P0012). For each sample, 20 µg of protein lysate was loaded and separated by SDS-PAGE, followed by transfer to 0.45 µm PVDF membranes. The membranes were blocked with 5% nonfat milk at room temperature for 2 hours and then incubated overnight at 4°C with primary antibodies. The following day, membranes were treated with HRP-conjugated secondary antibodies (Affinity Biosciences, Cincinnati, OH, USA; S0001) at room temperature for 1 hour. Signal detection was carried out using a Chemidoc imaging system (Bio-Rad, Hercules, CA, USA). The primary antibodies used in this analysis were: anti-FGF-2 antibody (Proteintech, Rosemont, IL, USA; 11234-1-AP, 1:1000), anti-ENO2 antibody (Proteintech, 10149-1-AP, 1:1000), and anti-β-actin antibody (Affinity Biosciences, AF7018, 1:5000).
Cell Transfection
Short hairpin RNAs (shRNAs) targeting ENO2 and negative controls were purchased from Shanghai Taitool Bioscience Co., Ltd. (Shanghai, China). HRMECs were transduced with ENO2-shRNA or vehicle-shRNA lentivirus at an MOI of 30 for 8 hours following the manufacturer's protocols. Three days after transduction, the cells were selected with puromycin (2 µg/mL) for 3 days and then used for further experiments.
Tube Formation Assay
For the tube formation assay, 96-well plates were precoated with 50 µL of Matrigel and allowed to gel for 40 minutes at 37°C. HRMECs were pre-treated under various experimental conditions for 18 hours. Subsequently, 1 × 10⁴ HRMECs were added to each well and incubated at 37°C for an additional 6 hours to allow tube formation. The resulting structures were visualized and captured using a microscope (Leica), and the lengths of the tubular structures were quantified with ImageJ software.
Migration Assay
In the migration assay, 2 × 10⁴ HRMECs were placed in the upper chambers of transwell plates with an 8 µm pore size (Corning, Corning, NY, USA) and incubated at 37°C under the appropriate conditions for 24 hours. After incubation, the cells were fixed in 4% paraformaldehyde for 30 minutes at room temperature and then stained with crystal violet for 1 hour to facilitate visualization. Subsequently, the membranes were washed three times with PBS to remove excess stain, and any cells that remained on the upper surface of the membrane were gently wiped away. The cells that migrated to the bottom side of the filter were captured using a microscope (Leica).
Proliferation Assay
The proliferation assay was carried out using the Click EdU kit in accordance with the manufacturer's instructions (Beyotime, C0071S). HRMECs were treated with 10 µM EdU for 2 hours to label newly synthesized DNA. After this incubation, the cells were fixed and permeabilized to facilitate staining and then exposed to the reaction buffer for 30 minutes at room temperature. To visualize the nuclei, the cells were stained with DAPI for 10 minutes at room temperature. Images were subsequently captured using a microscope (Leica).
Proteomic Analysis
The proteomic analysis was performed with the support of Shanghai Bioprofie Biotechnology (Shanghai, China). In summary, HRMECs were treated with either 0 ng/mL or 50 ng/mL of FGF2 for 24 hours, after which they were collected for data-independent acquisition (DIA) proteomics. Proteins were extracted using SDT lysis buffer and subjected to ultrasonication on ice for 2 minutes. The resulting cell lysate was centrifuged at 16,000× g for 15 minutes at 4°C, and the protein concentration was measured using a BCA kit (Beyotime). After digestion and desalting, the peptide concentrations were assessed by OD280 on a Nanodrop One device (Thermo Fisher Scientific). The samples were then analyzed by DIA liquid chromatography-tandem mass spectrometry with reverse-phase high-performance liquid chromatography on an EASY-nLC system (Thermo Fisher Scientific, Bremen, Germany). The DIA mass spectrometry data were processed with Spectronaut 17 (Biognosys AG, Zurich, Switzerland), with a false discovery rate of less than 1%.
Statistical Analysis
Data are presented as mean ± SEM. Statistical analyses were performed using SPSS 20.0 software (IBM, Chicago, IL, USA). A two-tailed unpaired t test or Mann–Whitney U test was used for comparisons between two groups, according to the normality of the data. One-way ANOVA was applied to comparisons among multiple groups as indicated (ns, not significant; * P < 0.05; ** P < 0.01; *** P < 0.001).
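The two-group rule above (two-tailed unpaired t test when the data are normal, Mann–Whitney U otherwise) was applied in SPSS 20.0. A minimal Python equivalent is sketched below; the use of Shapiro–Wilk as the normality check and the tube-formation values are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Unpaired two-group comparison: t test if both groups pass a normality
    check (Shapiro-Wilk, an assumed choice), otherwise Mann-Whitney U, mirroring
    the rule stated in the methods (the original analysis used SPSS 20.0)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    if normal:
        return "two-tailed unpaired t test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U test", stats.mannwhitneyu(a, b, alternative="two-sided").pvalue

# Hypothetical normalized tube-formation lengths for control vs. FGF2-treated HRMECs.
control = [1.00, 1.08, 0.95, 1.02, 0.97]
fgf2 = [1.62, 1.71, 1.55, 1.68, 1.60]
test_used, p_value = compare_two_groups(control, fgf2)
print(f"{test_used}: p = {p_value:.4f}")
```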
FGF2 Is Significantly Upregulated in the Retina of OIR Mice and Promotes Angiogenesis In Vitro
To understand the role of FGF2 in retinal neovascularization, the OIR model was developed as previously described. Immunofluorescence assays confirmed successful development of OIR, as shown by pathological angiogenesis in the retina ( A). Using RT-PCR and western blot assays, we observed a marked increase in FGF2 expression at both the mRNA and protein levels in the retina of OIR mice compared with the normal group ( B, C). To further elucidate the functional effects of FGF2 in angiogenesis, we stimulated HRMECs with recombinant human FGF2 protein. The purity of the HRMECs was verified by immunofluorescence assays using various cell markers ( A–C).
Our results demonstrated that stimulation of HRMECs with FGF2 at a concentration of 50 ng/mL significantly enhanced endothelial cell capabilities in tube formation, migration, and proliferation, emphasizing the critical role of FGF2 in promoting angiogenesis ( D–F). Collectively, these observations underscore the significance of FGF2 in facilitating neovascularization.
Proteomics Characterizes the Differentially Expressed Proteins in Endothelial Cells in Response to FGF2 Stimulation
To investigate the role of FGF2 in retinal neovascularization and its underlying mechanisms, we performed a DIA proteomic analysis using endothelial cells stimulated with 0 and 50 ng/mL of FGF2 ( A). Quality control analysis revealed that the identified peptides were distributed within a reasonable range. Principal component analysis showed that samples within each group clustered tightly. The results revealed that a total of 77 proteins were significantly upregulated in FGF2-stimulated endothelial cells, while 210 proteins showed notable downregulation, as illustrated by the scatter plots ( B). The top 50 differentially expressed proteins with high abundance were visualized using heat maps, providing a comprehensive overview of the expression changes ( C). Subsequent Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis indicated that the upregulated proteins were significantly enriched in glycolysis and other metabolic pathways, highlighting the metabolic shift induced by FGF2 stimulation ( D). Additionally, Gene Ontology analysis of the upregulated proteins in FGF2-stimulated cells corroborated these findings, revealing substantial enrichment of proteins involved in metabolic processes ( E). The downregulated proteins were enriched in pathways including apoptosis, the p53 signaling pathway, and others. These results collectively suggest that FGF2 stimulation may alter the metabolic pattern of endothelial cells, particularly through the upregulation of glycolytic activity, which may contribute to enhancing endothelial cell angiogenic capabilities.
Upregulation of ENO2 in Endothelial Cells After FGF2 Stimulation
Subcellular localization analysis indicated that many of the differentially expressed proteins are predominantly located in the cytoplasm ( A). Additionally, protein–protein interaction pathway analysis revealed that many of the differentially expressed proteins are associated with metabolic pathways; among the upregulated proteins, ENO2 was identified as a potential player in these metabolic processes ( B). Notably, ENO2 is an enzyme that serves as a key component of the glycolytic metabolic pathway. Further experiments confirmed that ENO2 expression was significantly elevated at both the mRNA and protein levels in endothelial cells in response to FGF2 stimulation ( C, D). Furthermore, we found that ENO2 expression was also upregulated in the retina of OIR mice compared with normal controls ( E, F). These results imply the importance of ENO2 in the metabolic pathways of endothelial cells in the context of FGF2-induced neovascularization.
ENO2 Mediates FGF2-induced Glycolysis and Angiogenesis in Endothelial Cells
Given that glycolytic metabolism is crucial for the normal function of endothelial cells, we hypothesized that FGF2 stimulation may promote angiogenesis by enhancing ENO2-modulated glycolysis and energy production in these cells. ATP serves as a vital indicator of cellular energy status, and measuring ATP levels allows us to assess the metabolic activity and overall energy state of endothelial cells. Additionally, the extracellular acidification rate primarily reflects cellular glycolytic activity.
Our findings demonstrated that FGF2 stimulation resulted in an increase in ATP production as well as an elevation in the extracellular acidification rate ( A, B), supporting our hypothesis regarding the role of glycolysis in mediating FGF2-promoted angiogenesis. We then blocked the glycolysis and oxidative phosphorylation pathways in endothelial cells using 2-DG (a glycolysis inhibitor) and oligomycin (an OXPHOS inhibitor), respectively, and measured the glycolysis/OXPHOS rate using the Glycolysis/OXPHOS Assay Kit. The results showed that 2-DG treatment inhibited FGF2-induced endothelial ATP and lactate production and suppressed the angiogenic abilities of endothelial cells, whereas oligomycin had no such effects ( A–E). These results demonstrate the essential role of glycolysis in FGF2-induced angiogenesis. To further explore the role of ENO2 in endothelial angiogenesis, we knocked down ENO2 in endothelial cells using lentivirus. Immunofluorescence assays confirmed the transduction efficiency ( C), and qRT-PCR and Western blot assays validated the successful knockdown of ENO2 ( D, E). We then exposed these lentivirus-transduced HRMECs to FGF2 stimulation. Notably, we observed that knockdown of ENO2 led to a significant reduction in ATP levels and a decrease in the extracellular acidification rate in FGF2-treated HRMECs ( F, G). Furthermore, downregulation of ENO2 impaired FGF2-induced endothelial cell tube formation, migration, and proliferation capabilities ( H, I, J). These results underscore the essential role of ENO2 in regulating glycolytic metabolism and modulating angiogenesis in endothelial cells under FGF2 stimulation.
Inhibition of ENO2 Reduces Glycolysis and Counteracts FGF2-induced Angiogenesis
AP-Ⅲ-a4 is a well-characterized inhibitor of ENO2. The molecular formula of AP-Ⅲ-a4 is shown in A. We asked whether inhibition of ENO2 with AP-Ⅲ-a4 would suppress FGF2-promoted endothelial angiogenesis. We found that endothelial cells treated with the ENO2 inhibitor AP-Ⅲ-a4 exhibited a notable decrease in both ATP levels and extracellular acidification rates ( B, C). Moreover, AP-Ⅲ-a4 effectively countered the pro-angiogenic effects of FGF2 on the tube formation, migration, and proliferation of endothelial cells ( D–F). Furthermore, experiments in the OIR animal model confirmed that AP-Ⅲ-a4 effectively suppressed pathological neovascularization in vivo but did not affect the proportion of avascular area ( G). Overall, inhibiting ENO2 reduces glycolysis in endothelial cells and antagonizes angiogenesis induced by FGF2. These findings suggest that ENO2 represents a promising new therapeutic target for the treatment of pathological neovascularization.
The functional vascular network of the retina is critical for normal vision, and pathological angiogenesis is a significant factor in the permanent vision loss associated with various disorders. The limitations of current treatment options for pathological neovascularization highlight the need for deeper exploration of the underlying mechanisms of these disorders and the pursuit of novel therapeutic strategies. In our study, we highlight the critical involvement of ENO2-mediated glycolysis in FGF2-induced angiogenesis. Proteomic analysis of endothelial cells stimulated by FGF2 indicated significant enrichment of upregulated proteins associated with glycolysis, and ENO2 played a vital role in this process. Additionally, the angiogenic effects of FGF2 were inhibited by ENO2 knockdown or the application of an ENO2 inhibitor.
Our findings reveal a previously unrecognized regulatory mechanism wherein ENO2-mediated glycolysis plays a pivotal role in the angiogenic effects of FGF2. During the angiogenesis process, endothelial cells undergo metabolic reprogramming, which is considered a crucial mechanism for supporting cell growth and vascularization. Previous studies have shown that endothelial cells can adapt to the substantial energy demands of neovascularization by increasing glucose uptake and enhancing glycolytic activity. Glycolysis serves as a primary pathway for endothelial cells to obtain energy rapidly. The products of glycolysis, such as lactate and ATP, not only provide the required energy but also influence the behavior of endothelial cells, promoting processes such as proliferation, migration, and tube formation, thereby facilitating angiogenesis. In our study, proteomic analysis revealed that multiple proteins were significantly upregulated in endothelial cells in response to FGF2 stimulation, with significant enrichment in pathways related to glycolysis and metabolic processes. Our findings indicate that glycolysis also plays a role in FGF2-induced angiogenesis. FGF2 is recognized as a critical proangiogenic factor with a vital role in the regulation of vascular development and repair. Our research reveals that FGF2 stimulation leads to metabolic reprogramming in vascular endothelial cells. FGF2 effectively activates glycolytic signaling pathways, thereby facilitating angiogenesis. Additionally, it has been reported that FGF2 plays a crucial role in the regulation of glycolysis in keloid fibroblasts. Under conditions of hypoxia, endothelial cells often rely on glycolysis rather than oxidative phosphorylation for energy production. Previous studies demonstrated that hypoxia is a common pathological factor in retinal neovascularization diseases. By stimulating glycolytic pathways, FGF2 helps these cells adapt to low-oxygen environments, thus facilitating the formation of new blood vessels. FGF2 therefore plays a crucial role in angiogenesis by activating glycolysis, which drives the metabolic reprogramming of endothelial cells. Previous research has shown that FGF2 activates the PI3K/Akt pathway, which plays a critical role in the regulation of endothelial cell metabolism, proliferation, and survival. Akt enhances the function of glycolytic enzymes, such as hexokinase and phosphofructokinase, thereby increasing the glycolytic activity of these cells. Our study highlights the significant regulatory function of ENO2 in the process of angiogenesis, demonstrating that its involvement in glycolysis is crucial for FGF2-mediated angiogenesis. ENO2, also known as γ-enolase, is part of the enolase enzyme family that is vital for the glycolytic pathway. This enzyme facilitates the transformation of 2-phosphoglycerate into phosphoenolpyruvate, contributing to the generation of ATP during glycolysis. Research has shown that ENO2 is expressed in endothelial cells, indicating its role in the metabolic functions of these cells. Our study identified that ENO2 plays a key role in the glycolytic pathway, thereby enhancing glucose metabolism and energy production within endothelial cells. A heightened glycolytic rate is strongly associated with the proliferation, migration, and tube formation abilities of endothelial cells, with ENO2 playing a vital regulatory role in these processes. This study also has a few limitations.
First, Kyoto Encyclopedia of Genes and Genomes analysis showed that ENO2 is also involved in the HIF-1 signaling pathway, which may be related to FGF2-induced angiogenesis, but its exact roles and mechanisms require further investigation. Second, we only investigated the effects of the ENO2 inhibitor on neovascular changes in the OIR model; further validation using other retinal pathological models is needed. Additionally, because the pathological processes of OIR may differ from those of human proliferative retinopathies, the clinical relevance of our findings is limited. The efficacy and safety of inhibiting ENO2 in clinical applications require further investigation. In conclusion, this study highlights the critical role of FGF2 in retinal neovascularization and its underlying mechanisms. We found that ENO2-mediated glycolysis is a key mechanism through which FGF2 promotes angiogenesis. Inhibition of ENO2 significantly reduces glycolysis and counteracts FGF2-induced angiogenic effects. These findings suggest that targeting ENO2 may provide a promising therapeutic approach for treating pathological neovascularization in retinal diseases. Supplement 1
Identification of pharmacogenetic variants from large scale next generation sequencing data in the Saudi population
f9d5f705-97f0-4054-b288-3af44592d263
8797234
Pharmacology[mh]
Pharmacogenomics (PGx) studies genetic variations in an individual's drug-metabolizing enzymes, associating these with adverse drug events or the level of drug response. Drug efficacy and toxicity may be predicted from the genetic background of individuals, particularly in respect of the Cytochrome P450 (CYP) family of liver enzymes. These enzymes catalyze the conversion of substances that are metabolized by our bodies, including pharmaceuticals. Overall, the efficacy of a drug is related to its Absorption, Distribution, Metabolism and Excretion (ADME). The efficacy of a drug may also be associated with drug target polymorphisms. Drug targets can include receptors, enzymes and membrane transporters. CYPs are responsible for deactivation of many drugs through direct metabolic activity or via facilitation of excretion and thus play a central role in ADME-related efficacy. CYPs are also important in the enzymatic conversion of some drugs from their native to bioactive forms. These differences in drug metabolism highlight the current trend towards individualized pharmacotherapy, such that the right drug is delivered at the right dose to the right patient. A standard dose of a given drug is not always safe, effective or economical in an individual patient. The high incidence of adverse drug events (ADEs) represents a heavy burden for the US health care system. Almost 7 million emergency department visits are related to ADEs each year, with an estimated cost of $3.5 billion annually. Large-scale genomic studies provide opportunities to associate drug responses with individual pharmacogenetic profiles. Such knowledge may improve drug efficacy, result in better outcomes, and in some instances prevent life-threatening adverse drug events. Dosing no longer needs to be based on the average drug responses of a patient population but can be personalized, taking into consideration individual pharmacogenomic and environmental variation. There are well-established drug-gene interactions that include, but are not limited to, clopidogrel (CYP2C19), warfarin (CYP2C9, VKORC1 and CYP4F2), thiopurines (TPMT, NUDT15), tacrolimus (CYP3A5) and fluorouracil (DPYD). These medications are commonly used globally, with Saudi Arabia being no exception. Protocols targeting use of the right drug at the right dose in the right person, based on genomic data to personalize treatment, have already been clinically implemented successfully in other countries, e.g., the RIGHT protocol in the US and the U-PGx project in Austria, Spain, Great Britain, Greece, Italy, the Netherlands and Slovenia. The allelic frequencies of genes encoding drug-metabolizing enzymes and their phenotypic consequences may vary considerably between ethnic groups. The impact of these allelic variants has been well studied in Caucasians and some other ethnicities, yet poorly studied in Arabs. This study expands pharmacogenetic knowledge of the Arab population. Description of the allelic spectrum of pharmacogenes, both known and novel variants, their frequency, and their phenotypic designation in Saudi nationals will provide a basis for better clinical management in this population. During the last decade, technological advances have enabled comprehensive mapping of human pharmacogenes. Next Generation Sequencing (NGS) and High Performance Computing (HPC) are two technologies that have enhanced this field.
The mining of variants from sequence data generated by population-based genome programs provides an opportunity to characterize the pharmacogenomic profiles of each of these groups. Here we describe our findings from the Saudi population. Mining of NGS data from a total of 11,889 unrelated individuals (1,928 PGx gene panels and 9,961 exomes) was used to impute allele and haplotype frequencies. We analyzed frequencies of 82 haplotypes distributed across 8 pharmacogenes. Nineteen CYP2C9 variants (*2, *3, *5, *6, *7, *8, *9, *11, *12, *14, *24, *32, *33, *36, *39, *43, *44, *45, *60) were identified that jointly accounted for 21.1% of all CYP2C9 alleles in Saudi Arabs; however, only CYP2C9*2, with a minor allele frequency (MAF) of 13.4%, and CYP2C9*3 (MAF = 5.3%) were relatively common. Fifteen variant alleles (*2, *3, *4A, *6, *8, *9, *12, *13, *15, *16, *17, *24, *28, *30, *34) were found in CYP2C19, of which *17 and *2 were the most common: 25.9% and 9.6%, respectively. A splice site variant (rs776746) that defines the core allele for CYP3A5*3 was present in 84.7% of the population. Three other alleles (*6, *7 and *8) in CYP3A5 showed MAFs from <0.1% to 2.4%. In CYP4F2, 44.4% of Saudi individuals harbor a *3 allele, the remaining population being wild type. We detected four VKORC1 alleles; the most common was VKORC1*2 (MAF = 53.7%), followed by rs7294, 3730G>A (MAF = 29.2%). Two other VKORC1 variants, 106G>T (rs61742245) and 196G>A (rs72547529), were less commonly observed, with MAFs of 2.1% and <0.1%, respectively. Genetic polymorphisms in DPYD and TPMT were rare in the Saudi population. We identified eight variants (rs67376798, rs3918290, rs1801266, rs115232898, rs112766203.1, rs72549304, rs146356975, rs56038477) for DPYD and ten star alleles for TPMT, although the overall MAFs for both of these were low: 0.7% (DPYD) and 0.9% (TPMT). Two alleles (*3 and *5) were identified in NUDT15. The *3 allele was present with a MAF of 1.8%; *5 was much less common (MAF <0.1%) in the population. Functional consequences predicted for PGx alleles in the Saudi population were found predominantly in CYP genes. In CYP3A5 we found the highest proportion (87.5%) of inactive alleles, as a result of the frequently observed intronic splice site CYP3A5*3 variant. CYP4F2 showed decreased-function alleles in 44.4% of individuals, whereas in two other CYP genes (CYP2C9 and CYP2C19) reduced-function alleles (inactive or decreased) were less common, at 20.6% and 10.1%, respectively. In other prominent PGx genes, allele function was much more conserved, with only 1.8%, 0.7% and 0.7% of NUDT15, TPMT and DPYD variants, respectively, predicted to affect activity. In contrast, VKORC1 was functionally highly polymorphic, with 53.7% of Saudi individuals harboring variants predicted to result in decreased activity, whereas 31.3% carry variants leading to increased metabolic activity ( and ). Based on genotypic data and the predicted functional consequences of variant alleles, we defined genotype-to-phenotype correlations. The phenotyping algorithms were derived from CPIC guidelines, which were available only for CYP2C19, CYP2C9, CYP3A5, TPMT, NUDT15, and DPYD. Extensive metabolizer (EM) was the most frequent category for DPYD (98.7%), TPMT (97.8%) and NUDT15 (95.6%). EM status was also the most frequent (64.5% and 38.3%) for CYP2C9 and CYP2C19, although a substantial number of the remaining individuals (35.4% and 61.6%) are predicted to carry an altered drug metabolizer status for these two genes.
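As an illustration of the genotype-to-phenotype step described above, the sketch below maps a diplotype to a CPIC-style metabolizer category from per-allele function assignments; the allele-function table and phenotype rules shown are simplified assumptions for illustration and do not reproduce the study's actual lookup tables.

def cyp2c19_phenotype(allele1, allele2):
    # Simplified, illustrative allele-function assignments (not an exhaustive CPIC table).
    function = {"*1": "normal", "*2": "none", "*3": "none", "*17": "increased"}
    funcs = [function.get(allele1, "unknown"), function.get(allele2, "unknown")]
    if "unknown" in funcs:
        return "indeterminate"
    if funcs == ["none", "none"]:
        return "poor metabolizer"
    if "none" in funcs:
        return "intermediate metabolizer"   # includes *2/*17 under CPIC-style rules
    if funcs == ["increased", "increased"]:
        return "ultrarapid metabolizer"
    if "increased" in funcs:
        return "rapid metabolizer"
    return "extensive/normal metabolizer"

print(cyp2c19_phenotype("*1", "*17"))  # rapid metabolizer
print(cyp2c19_phenotype("*2", "*2"))   # poor metabolizer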
CYP3A5 non-expressers (poor metabolizers, PM) represented 77.8% of the population. The percentage of Saudi individuals who carry actionable PGx variant(s) is summarized in . Of the 1,928 Saudi individuals genotyped using the PGx gene panel, 99.2% carry at least one actionable PGx allele, with a maximum of 5 detected in 1.1% of the population. Of all 62 previously reported rare variants (MAF<1%) predicted to be pathogenic (based on a two-fold scoring approach), four (1 stop-gain, 2 frameshift and 2 missense variants), with an aggregated frequency of 0.67%, were uniquely observed in Saudi individuals when compared with other populations (European, Finnish, Hispanic, African, South Asian, East Asian, Ashkenazi Jews and Arabs). Two missense variants were only present in Arabs ( and ). Next, we identified 46 novel sequence alterations in seven of the eight PGx genes studied. They included 5 stop-gain, 5 splice site, 1 frameshift, and 35 missense variants with an ADME score of ≥84%. DPYD revealed the largest number (n = 19) of novel alterations, the most frequent being DPYD:p.Ile971Thr, with a MAF of 0.00055 ( and ). Inter-individual differences in drug efficacy drive current trends towards personalized pharmacotherapy targeting delivery of the right drug, at the right dose, to the right patient. A standard dose of a given drug is not always safe, effective or economical in an individual patient. Mining of large-scale NGS data is a very powerful tool for cataloging the range and frequency of genetic variation in populations. We used whole exome and PGx gene panel NGS data to estimate pharmacogenetic diversity in the Saudi population, which has thus far been poorly recorded in current databases compared to many other ethnic groups. Our analysis provides the most comprehensive overview published to date of clinically relevant PGx variability across 8 phase I and phase II enzymes in the Saudi population. We found that 61.6% of the Saudi cohort carry actionable CYP2C19 alleles, which may be associated with an increased risk of major adverse cardiovascular events during antiplatelet therapy with clopidogrel. In this instance, ADEs range from stent thrombosis in poor and intermediate metabolizers to bleeding risk in rapid and ultrarapid metabolizers. This drug was prescribed to several thousand patients who were treated at King Faisal Specialist Hospital and Research Centre, Riyadh, Saudi Arabia (KFSH&RC) last year alone. As in European, African and Ashkenazi populations, CYP2C19*17 was the most frequent allele. CYP2C19*30 was unique to Arabs, while CYP2C19*13 and CYP2C19*15, detected in Saudi individuals, were otherwise observed only in Africans (*13 and *15) and Ashkenazi Jews (*15). Actionable CYP2C9 alleles associated with the metabolism of warfarin were identified in 35.4% of Saudis. Furthermore, the CYP4F2*3 and VKORC1*2 variants, responsible for increased and decreased warfarin activity respectively, were strongly represented in our study population. CYP4F2 acts as an important counterpart to VKORC1 in limiting excessive accumulation of vitamin K. Inappropriate warfarin dosing underlies one of the most frequently reported adverse events, acute haemorrhage being among the most common causes of emergency visits in the US. At KFSH&RC alone, warfarin is prescribed for several thousand patients every year. According to updated CPIC guidelines, the genotypes of CYP2C9, VKORC1 and CYP4F2 should be considered together to estimate therapeutic warfarin dosing.
Key factors strongly considered in dosing algorithms include ethnicity and population-related genetic information. The majority of PGx data underpinning these guidelines arises from individuals of European, African American and East Asian ancestry. Very little is known about pharmacogenetics in Arabs. Our study shows that the frequencies of CYP2C9*2, *3, and VKORC1*2 in the Saudi population are similar to those in Europeans. Other CYP2C9 variants common in Africans and present in Europeans (e.g., CYP2C9*5, *6, *8, and *11), which should be considered in warfarin dosing algorithms due to the associated bleeding risk, show low occurrence in the Saudi population. Based on our findings, and subject to clinical validation, dosing recommendations for warfarin in Saudi patients should follow those for non-African ancestry, as recommended in CPIC guidelines. However, studying the impact of the significantly higher frequency of the functionally inactive CYP2C9*33 allele on warfarin dosing in the Saudi population is strongly indicated. The vast majority of the Saudi population carries the CYP3A5*3 variant, which results in a truncated mRNA with loss of protein expression. The frequency of the *3 allele varies widely across human populations and is correlated with distance from the equator. Equatorial populations may experience shortage of water and exhibit a sodium-retaining phenotype in hot climates. Our findings show the frequency of this allele in the Saudi population to be similar to that in six other populations (Ashkenazi, European, American, Finnish, East Asian and South Asian). The enzyme encoded by this gene catalyzes the metabolism of tacrolimus, a mainstay immunosuppressant. Patients with the CYP3A5*3 allele require the standard dose of this medication. At KFSH&RC alone, ~4000 patients received tacrolimus last year, and 22.2% of these may be normal metabolizers (2.6%) or intermediate metabolizers (19.6%), requiring an increased tacrolimus dose to achieve a successful outcome. Clinical validation of this would be required, particularly given the relatedness of donors and recipients in a consanguineous population, where histoincompatibility may be less than observed elsewhere. Genetic variation in TPMT and NUDT15 is strongly linked to the risk of adverse reactions to thiopurines, which are commonly used for treatment of malignant and non-malignant conditions. The “normal” starting doses are generally high, based on clinical trials that are enriched in wild-type individuals. Full doses are tailored for normal metabolizers and may cause acute toxicity in intermediate and poor metabolizers. Thiopurine tolerance is highly correlated with genetic ancestry. The functionally inactive TPMT*3A allele is much less common in Saudi individuals relative to American, European and Ashkenazi populations. CPIC guidelines recommend a customized dose of thiopurines in compound intermediate metabolizers (intermediate metabolizers in both TPMT and NUDT15). We identified 0.03% (n = 3) compound intermediate metabolizers in the Saudi population. Genetic variation in DPYD is a strong predictor of adverse risk related to use of the chemotherapeutic agent fluorouracil, commonly used in the treatment of various malignancies. Many cases have been reported of severe toxicities, or even lethal outcomes, due to the DPYD poor or null metabolizer phenotype. In our study, we identified 1.3% of Saudi individuals who carry one functionally normal allele plus either one null or one functionally decreased allele, and who would be predicted to be intermediate metabolizers.
Reduced doses of fluorouracil may be indicated for these individuals. More importantly, our study detected in the Saudi population the rare pathogenic DPYD mutation (c.257C>T), which may be responsible for severe toxicity in heterozygous patients, or lethality in homozygous cancer patients, treated with fluoropyrimidines. We found this variant to be significantly enriched in the Saudi population, with approximately 1 in every 333 individuals heterozygous for this allele. This DPYD allele is also present in the Qatari population (0.3%), whereas it is very rare in other populations, with frequencies (relative to the Saudi population) <36-fold in Americans, <52-fold in Europeans, <99-fold in South Asians, and it was absent in the other compared populations ( and ). Given the high rate of consanguinity (~60%) in Saudi Arabia, we can expect, relative to outbred populations, a higher incidence of homozygotes for the DPYD (c.257C>T) mutation. Consanguinity increases the probability that a mate carries the same recessive allele. Thus, genotyping DPYD in the Saudi population may have greater clinical relevance. In most of the pharmacogenes screened we observed alleles shared with other Arabs, and some unique to the Saudi population. Amongst those shared with other Arabs, some were observed at significantly (p<0.05) different frequencies ( and Tables). Large-scale NGS data mining enables discovery of novel and rare pharmacogenetic alterations. These are often population-specific alleles and are not incorporated within current pharmacogenomic assays. Our study shows that such variants are present in the Saudi population, with computational algorithms predicting their functional significance in multiple instances. They may significantly add to knowledge of potentially actionable variants in ADME genes within the Saudi population and should be further investigated. Novel variants require experimental validation to test their functional effects on drug response. Our study highlights the value of mining large NGS databases as a powerful tool to improve knowledge of genomic variation within ADME genes and to stimulate further investigation and eventual implementation in clinical practice. The data we present from one of the larger Middle Eastern countries provide the most comprehensive overview of pharmacogenetic variants in Arabs, who to date are underrepresented in international genomic databases. We believe it will have both immediate and near-term clinical implications, expanding the application of pharmacogenetics and the practice of precision or individualized medicine in Arab patients. Study limitations The clinical impact of variants identified by this study remains in question, as information from relevant clinical trials is limited. While PGx variants are predicted to be actionable in other populations, one cannot assume that these variants will ultimately have the same impact in the Saudi population without clinical verification. Another limitation of our study is the technical constraints of exome sequencing; non-coding regions and loci with high genomic complexity are poorly covered, or not covered at all. Structural changes and copy number variations, which may be relevant, are not reliably identified by whole exome or gene panel sequencing. Thus, we were not able to call star alleles with whole gene deletions, duplications or hybrids, which are common in the assignment of CYP2D6 alleles. Accordingly, we did not include CYP2D6 in our analysis.
Furthermore, actionable variants located in non-coding regions (CYP2C19 rs12248560, CYP3A5 rs776746, VKORC1 rs9934438, VKORC1 rs7294, DPYD rs67376798) were not covered by whole exome sequencing; our data for these were obtained exclusively from the PGx custom gene panel. This manuscript was based on access to fully anonymized data from the Saudi Human Genome Project, for which a waiver of consent was granted by the IRB of King Faisal Specialist Hospital and Research Center. The dataset used for mining of pharmacogenomic variants comprised 9,961 exomes and 1,928 PGx custom gene panels (genes are listed in ) from unrelated Arab individuals sequenced by the Saudi Human Genome Program (SHGP) between 2015 and 2019, as part of a comprehensive investigation of rare diseases in the Saudi population. We studied eight genes for which Clinical Pharmacogenetics Implementation Consortium (CPIC) guidelines are curated ( https://cpicpgx.org/guidelines/ ) and which are present on FDA labels ( https://www.fda.gov.Drugs/ScienceReseach/ucm572698 ). CYP star allele assignments and their clinical function were derived from the Pharmacogene Variation Consortium ( https://www.pharmvar.org/genes/ ) and CPIC allele functional tables. Metabolizer types were inferred based on CPIC guidelines and the Pharmacogenomics Knowledgebase (PharmGKB, https://www.pharmgkb.org/ ) and were defined as follows: ultrarapid metabolizer (UM), intermediate metabolizer (IM), extensive/normal metabolizer (EM), poor metabolizer (PM), rapid metabolizer (RM), IM to EM and PM to EM. Our method for star allele calling was based on the Stargazer algorithm (v.1.0.8). This algorithm performs statistical haplotype phasing using Beagle with reference samples from the 1000 Genomes Project. The Beagle method is based on a localized haplotype-cluster model, an empirical linkage disequilibrium model that takes the local structure of the data into consideration. The Beagle algorithm is accurate and runs quickly due to the use of an EM-based algorithm that iteratively fits the best model to the data. The phased haplotypes computed by Beagle are then matched to publicly available star allele information, mostly in the PharmVar database ( https://www.pharmvar.org ) and PharmGKB ( https://www.pharmgkb.org/ ).
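To make the haplotype-to-star-allele matching step concrete, here is a minimal sketch; the star-allele definition table and input format are hypothetical simplifications for illustration and do not reproduce Stargazer's actual implementation or PharmVar's full definitions.

# Hypothetical, simplified star-allele definitions: each allele is defined by the set of
# core variants (rsIDs) expected on a haplotype. Real PharmVar definitions are richer.
CYP2C19_DEFINITIONS = {
    "*2":  {"rs4244285"},
    "*17": {"rs12248560"},
    "*1":  set(),            # default/reference allele when no core variant is present
}

def call_star_allele(haplotype_variants, definitions):
    """Return the star allele whose defining variants are all present on the haplotype,
    preferring the most specific (largest) matching definition."""
    best, best_size = "*1", -1
    for allele, core in definitions.items():
        if core <= haplotype_variants and len(core) > best_size:
            best, best_size = allele, len(core)
    return best

# One phased individual = two haplotypes (e.g., the output of Beagle phasing).
hap1 = {"rs12248560"}          # carries the *17-defining variant
hap2 = set()                   # no core variant -> reference *1
diplotype = (call_star_allele(hap1, CYP2C19_DEFINITIONS),
             call_star_allele(hap2, CYP2C19_DEFINITIONS))
print("/".join(sorted(diplotype)))   # e.g. "*1/*17"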
Finally, Stargazer reports the star allele calls in a tabular format, along with the predicted metabolizer status. Intronic and UTR variants were covered only by the PGx panel, and their frequencies were calculated based on the cohort of 1,928 individuals. Variants with MAF <1% were defined as rare, and genetic alterations with frequencies that exceeded the observed frequencies in other populations (European, Finnish, Hispanic, African, South Asian, East Asian, Ashkenazi Jews and Arabs) by >20-fold were considered “Saudi-specific”. A Chi-square test was used to assess differences in allele frequencies between populations. A p-value less than 0.05 was considered significant. Next, we classified alleles as novel if they were not observed in 1000 Genomes (phase 3), gnomAD (v.3.1.1), ExAC (v.0.3) or Kaviar (v.160204). The functional consequences of rare Saudi-specific and novel PGx variants were predicted using a two-fold approach. Any variants with a high IMPACT rating, such as frameshift indels or stop-loss variants, were considered deleterious. We then applied the ADME-optimized framework, an ensemble of deleteriousness prediction methods for pharmacogenes. We used 18 prediction algorithms to compute the ADME scores, including CADD, SIFT, PolyPhen, LRT (likelihood ratio test), MutationAssessor, FATHMM, FATHMM-MKL, PROVEAN, VEST3, DANN, MetaSVM, MetaLR, GERP++, SiPhy, PhyloP-vertebrate, PhyloP-mammalian, PhastCons-vertebrate, and PhastCons-mammalian. ADME scores larger than 84% were considered to affect pharmacogene functionality. We used the phenotypes generated by Stargazer for CYP2C19, CYP2C9, CYP3A5, DPYD, NUDT15 and TPMT to determine the percentage of individuals predicted to have actionable PGx variants. For VKORC1 (rs9934438) and CYP4F2*3 (rs2108622), individuals carrying heterozygous (CT) or homozygous (TT) genotypes, and heterozygous (GA) or homozygous (AA) genotypes, respectively, were considered to have an actionable variant in those genes. S1 Table. Actionable PGx variants identified in the Saudi population (XLSX). S2 Table. List of rare PGx variants (XLSX). S3 Table. List of novel PGx variants in the Saudi population (XLSX). S4 Table. List of genes in the custom PGx gene panel (XLSX). S5 Table. Classification thresholds and prediction algorithms for novel PGx variants (XLSX).
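For reference, the threshold-based classification and allele-frequency comparison described in the Methods above can be sketched as follows; the cutoffs (MAF <1%, >20-fold enrichment, ADME score >84%) come from the text, while the example counts and frequencies are hypothetical.

from scipy.stats import chi2_contingency

def classify_variant(saudi_maf, other_pop_mafs, adme_score, seen_in_reference_dbs):
    """Apply the study's stated thresholds to one variant (illustrative wrapper only)."""
    labels = []
    if saudi_maf < 0.01:
        labels.append("rare")
    if other_pop_mafs and all(saudi_maf > 20 * f for f in other_pop_mafs):
        labels.append("Saudi-specific")          # >20-fold above every compared population
    if not seen_in_reference_dbs:
        labels.append("novel")                   # absent from 1000 Genomes, gnomAD, ExAC, Kaviar
    if adme_score > 0.84:
        labels.append("predicted deleterious")   # ADME-optimized ensemble score >84%
    return labels

# Chi-square comparison of allele counts between two populations (alt vs. ref alleles).
table = [[131, 23647],    # hypothetical Saudi alt/ref allele counts
         [12, 49988]]     # hypothetical comparison-population counts
chi2, p, _, _ = chi2_contingency(table)
print(classify_variant(0.0055, [0.0001, 0.00012], 0.91, False))
print(f"chi-square p-value: {p:.3g}")  # p < 0.05 considered significant in the study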
The Certificate of Added Competence credentialling program in family medicine: a descriptive survey of the family physician perspective of enhanced skill practices in Canada
8b82382c-d01a-46ce-b5cc-e15eb2515abf
10690006
Family Medicine[mh]
As the country's professional home for family physicians, the College of Family Physicians of Canada (CFPC, “The College”) encourages a primary healthcare system that is accessible, high-quality, comprehensive, and continuous. Given that there is significant heterogeneity in the scope of practice of family physicians across the country, the College recognizes an opportunity to support this vision by promoting collaborative relationships that leverage the strengths of individual family physicians. Indeed, the benefits of collaborative team-based care were enumerated within the results of the College's recent Outcomes of Training Project and are emphasized strongly as desired outcomes of their subsequent plan for curricular expansion in post-graduate family medicine training, as well as within their professional profile for family physicians. In this regard, it is notable that some family physicians provide full-scope, generalist care to patients, while others focus some or all of their practice in certain domains of care. These latter practitioners are often enhanced skill (ES) physicians, individuals who have developed advanced competence in a domain of care that falls outside the typical family medicine scope (e.g., anesthesia) or that reflects specialized advances in traditional aspects of primary care (e.g., addictions medicine). While there are numerous ways to designate a family physician as an enhanced skill practitioner, the College offers its own credential: the Certificate of Added Competence (CAC). The history of the College designation for added competence began in 1982 with the establishment of the CAC in Emergency Medicine (EM). In 2015, a time-limited application was opened for physicians who had previously acquired competence, either through residency training or through practice experience and professional development, in four new domains of care: Palliative Care (PC), Care of the Elderly (COE), Family Practice Anesthesia (FPA), and Sport and Exercise Medicine (SEM). Subsequent to these, certificates in Addictions Medicine (AM), Obstetrical Surgical Skills (OSS), and Enhanced Surgical Skills (ESS) have been established and awarded. In August 2021, the College shared with the research team that it had awarded 6,045 CACs (EM = 3,842; PC = 617; COE = 425; FPA = 430; SEM = 360; AM = 292; OSS = 53; ESS = 26). With respect to the delivery of high-quality, comprehensive, community-adaptive care, family physicians with CACs work to extend the services they provide to their own patients and/or work in conjunction with generalist family physicians to bring specific expertise where it otherwise might not be accessible. Indeed, when these enhanced skill physicians ground their practice in the needs of the local community and work in collaboration with other healthcare providers, numerous benefits to comprehensive care are realized within that community: patients need to travel less distance for specialized care, community physicians are afforded an important resource for navigating specific healthcare needs, and the continuity of care between patients and their primary physicians is protected. However, along with these benefits come concerns that the certificates are also promoting unintended practice behaviours. In particular, the CACs might be encouraging new family medicine practitioners to move away from comprehensive, community-adaptive care and a practice philosophy founded on generalist principles towards practices with an increased focus on specialization.
Through a recent multiple-case study, we affirmed that the four CACs introduced in 2015 (i.e., PC, SEM, COE, FPA) are working to move primary care family medicine in the country both towards and away from comprehensive, continuous forms of practice. This previous qualitative work involved document review and interviews with enhanced skill and generalist physicians, trainees, and administrators associated with six family medicine practices across Canada (representing geographical, population, and practice arrangement diversity), and yielded a description of the factors that influence how family physicians operationalize the certificates and/or work with those who hold them. These included the prevailing community need, formal privileging and practice requirements, remuneration structure, community culture and practice norms, and individual aspirations. Notably, our findings also identified that CAC holders interact with other practitioners via one of four collaborative models, each of which brings distinct benefits to comprehensive care: an enhanced scope of services model, a shared-care model, a family physician-aligned transfer of care model, and a specialist-aligned transfer of care model. Upon completion of the initial multiple case study, we engaged in focused analyses of the data in order to develop descriptions of what motivates family physicians and family medicine residents to pursue a CAC, as well as of unique experiences within certain disciplines, including Sport and Exercise Medicine, Emergency Medicine, Palliative Care, and Care of the Elderly. This work highlighted that an individual's perceptions of community need and their desire to build a practice scope that matches their personal and professional preferences intersect to promote (or discourage) pursuit of the credential. Family physicians and family medicine trainees pursue the credential to meet community healthcare needs, limit or stimulate diversity in practice, secure perceived professional benefits, and/or validate their sense of expertise. Secondary analysis also highlighted that family physicians face barriers to engaging in enhanced skill training once their practice is established. While we believe that the original and secondary analyses of our multiple case study data were adequately powered and resulted in conceptual propositions that are widely relevant, we acknowledge that our understanding of the impacts, motivations, and barriers associated with the CAC credentials introduced in 2015 could be strengthened through the collection of data pertaining to a wider sample of family physicians. Accordingly, we undertook a broad survey of family physicians in Canada. The purpose of this survey was to validate, with a larger sample of Canadian family physicians, the descriptions of impact and motivation generated via our qualitative case study. Further to these aims, the survey was also designed to elicit data that improve our understanding of the degree to which the perceptions about the practices of family physicians with and without the CAC generated in our qualitative work are consistent with the experiences of family physicians across the country. Survey development and design Guided by the seven-step process to questionnaire development for educational research, we constructed our survey based on the outcomes of our previous work.
The survey was created in collaboration with family physicians with and without CACs, members of the CFPC Academic Family Medicine unit, and medical education experts who validated, prima facie, that the items were clearly expressed, meaningful for the intended population, relevant to credentialing education and policy, and likely to be interpreted in a manner consistent with the thematic descriptions generated from the previous work. The survey comprised four distinct sections that posed questions about features of personal and professional identity, practice type and location, and training experiences. The survey also queried respondents with respect to propositions on their general perceptions of the impacts of the CAC program and their perceptions about the specific ways in which CAC holders organize their activities collaboratively with other physicians. Respondents who identified as CAC holders were also asked about the outcomes they have experienced as a consequence of obtaining the certificate. We formatted survey items for fixed-choice, multiple-choice, free-text, or 7-point Likert scale (1 = strongly disagree, 3 = disagree, 5 = agree, and 7 = strongly agree) responses. We used LimeSurvey, an open-source online survey web application (Limesurvey GmbH, Germany), to build the survey and collect responses. The full survey (EN and FR) is available as . Participants We circulated the survey in both English and French from November 2019 to January 2020. The CFPC facilitated survey administration to active family physicians across Canada who had agreed to be contacted by the College for research purposes ( N = 23,916; 20,719 English-speaking and 3,197 French-speaking), with the goal of gathering as many responses as possible. At the time of survey circulation, 346 certificates had been awarded in COE, 544 in PC, 385 in FPA, 322 in SEM, 3,643 in EM, and 268 in AM. It was not possible to determine the number of CAC holders in the population of 23,916 family physicians contacted to complete the survey. Eligible participants were sent a reminder to complete the survey in December 2019. The survey was not distributed to CFPC members who were not practicing independent family physicians (e.g., residents, researchers, nurses). Ethics Ethics approval was obtained through the Hamilton Integrated Research Ethics Board (#5151) and participants provided informed consent prior to responding to the survey. Data analysis Descriptive statistics in the form of percentages, proportions, or frequency counts were generated to present the survey's findings. Specifically, the questions of certificate impact posed only to CAC holders were addressed by way of fixed-choice “yes” or “no” answers. Responses garnered from family physicians with and without CACs to questions about general perceptions of the CAC program and the collaborative organization of CAC holders are presented as means (and standard deviations). For the most part, survey questions elicited responses regarding perceptions of the 2015 suite of CACs (PC, FPA, COE, SEM); however, where questions prompted reflection on CAC holders' experiences, the results present findings pertaining to all respondents, including other certificate holders, ES physicians, and generalist physicians.
Given our objective was to affirm the descriptions of impact and motivation associated with the CAC program elicited from our multiple case study, the methods (and, in turn, results) presented here are descriptive in a manner aligned with our previous work. Those interested in inferential between-group comparisons that attend to other potential research questions associated with the resulting survey data may refer to the commissioned report presented to the CFPC in March 2020 entitled “ Understanding the Impact of the CFPC Certificates of Added Competence .” This publicly available document includes several appendices that present analyses of relationships between CAC designations and responses pertaining to practice features and perceptions of impact.
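For reference, the descriptive summaries reported below (proportions for fixed-choice items, means and standard deviations for 7-point Likert items) can be reproduced with a short pandas sketch; the column names and example data are hypothetical and do not represent the study's actual dataset.

import pandas as pd

# Hypothetical extract of survey responses: one row per respondent.
responses = pd.DataFrame({
    "respondent_type": ["CAC holder", "Generalist", "CAC holder", "ES physician"],
    "greater_enjoyment": ["yes", None, "no", None],         # fixed-choice item (CAC holders only)
    "cac_addresses_community_needs": [6, 5, 7, 4],          # 7-point Likert item (1-7)
})

# Proportion of "yes" among CAC holders for a fixed-choice impact question.
cac = responses[responses["respondent_type"] == "CAC holder"]
pct_yes = (cac["greater_enjoyment"] == "yes").mean() * 100
print(f"{pct_yes:.1f}% of CAC holders reported greater enjoyment")

# Mean and standard deviation for a Likert item, across all respondents.
item = responses["cac_addresses_community_needs"]
print(f"{item.mean():.1f} ± {item.std():.1f} (7-point scale)")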
Participants A total of 1,525 individuals completed the survey, indicating an overall response rate of 6.38%. With respect to practitioner type, 647 were general family physicians, 278 were enhanced skill physicians without a CAC and 600 had a CAC. Amongst the CAC holders, 560 had one CAC, 37 had two CACs and three had three CACs. Therefore, 643 certificates were represented by these 600 participants. The survey was structured so that the unit of analysis was each individual member of the CFPC, and not awarded certificates. Accordingly, it was necessary to label each participant as a particular type of physician: a CAC holder in a particular domain, an ES family physician, or a generalist family physician. This process required us to make decisions about the label assigned to those individuals with multiple certificates. Considerable heterogeneity in the combination of multiple certificates made it difficult to classify a meaningful group of multiple CAC holders. As such, for those physicians with two or more CACs, we simply assigned the label associated with the certificate relevant to our original study (PC, COE, FPA, or SEM). In cases where the physician held multiple CACs of interest to the original study, we assigned the label associated with the CAC they listed first in the survey. This coding process reduced the number of CACs reflected in the survey data from 643 to 600 (equivalent to the number of respondents), with most of the removed certificates being associated with the Emergency Medicine ( n = 28) and Addictions Medicine ( n = 6) designations. This sample population of certificate holders was representative of 11.7% of all certificates awarded in Canada. Of the total number of respondents, 757 (49.6%) were women, 731 (47.9%) were men, 2 (0.1%) identified as non-binary, and 35 (2.2%) preferred not to report gender . There were 1,219 (79.9%) Canadian Medical Graduates (CMG), 289 (18.95%) International Medical Graduates (IMG), and 17 (1.1%) did not identify themselves as either in our sample.
The average age of participants was 48.9 (±12.1) years and the average number of years in practice was 17.0 (±11.9) years. There were 1,401 (91.9%) respondents who identified English as their primary language, and 124 (8.1%) who identified French. With respect to CACs held, 106 CAC holders held certificates in PC (19.5% of all PC certificate holders), 66 in COE (20.2%), 77 in SEM (24.5%), and 76 in FPA (20.0%). The remaining 309 certificates indicated by respondents were in the domains of either Emergency Medicine (267; 7.3%) or Addictions Medicine (42; 15.7%). Among the CAC respondents, 274 indicated that their certificate was required for privileging. Perceptions of care delivery CAC holders in each domain are perceived to work in different types of care models in collaboration with other physicians in the community. Physicians with certificates in PC, COE, and SEM are perceived to most often work in a shared-care model, described in the survey as occurring when “the enhanced skilled physician works with the referring family physician” and does not act as the most responsible physician. Respondents generally agree that FPA-CAC holders work in a family medicine-aligned transfer-of-care model, in which the CAC holder “takes over the care of the patient from the referring family physician.” These CAC holders are also less frequently perceived as working in a shared-care model. Impact of CAC on professional satisfaction and wellbeing We queried those respondents who indicated having a certificate about outcomes they experienced as a result of acquiring a CAC. These outcomes pertained to ideas of preferable practice arrangements, including fulfilling practice scopes and improved remuneration structures ( , Section 2, Question 21). Of all CAC respondents, 44.0% indicated they experienced greater enjoyment in their practice due to having more expertise and operating within a smaller scope. Notably, this experience was reported by a greater proportion of COE-CAC holders (54.5%) than other certificate holders. Similarly, 46.0% of all CAC holders indicated experiencing increased satisfaction from practice due to spending more time in areas of practice that they found more interesting. This experience was also reported by a greater proportion of COE (54.0%) CAC holders than other CAC holders.
When asked about their perception of CAC holders, participants indicated slight agreement with the statement that CAC holders are fundamentally different compared to generalist family physicians (4.5 ± 1.6 on a 7-point Likert scale (1 = strongly disagree; 7 = strongly agree)) and specialists (4.7 ± 1.6). Despite this, participants agreed with the statements that CAC holders take a family medicine approach in their specialized area of healthcare delivery (5.4 ± 1.5) and that prior experience in a comprehensive family practice improves the care provided by those with a CAC (5.4 ± 1.5). Regarding practice scopes, respondents agreed with the proposition that CAC holders address specific community needs (5.6 ± 1.3). There was also agreement that CAC holders may have taken over some of the scope of practice that is provided by comprehensive family practices (4.5 ± 1.5). However, respondents expressed more strongly that family physicians with CACs enhance the capacity of generalist family physicians when providing comprehensive care within a community (5.3 ± 1.5). Some benefits and supports were reported when working with a CAC holder, including allowing generalists to spend more time on other aspects of patient care and helping maintain patient continuity with primary care physicians. Most notably, the responses suggest that CAC holders are perceived to be an important source of support for facilitating healthcare delivery for rural and remote patients. The data suggest that this is because CAC physicians take on referrals that would otherwise require patients to travel to a specialist outside the community. There was particularly strong agreement amongst FPA-CAC respondents regarding this sentiment (5.5 ± 1.4). Survey responses did highlight some concerns regarding the certificates. Specifically, generalist family physicians reported strong levels of agreement with the idea that the certificates devalue the expertise of family physicians who do not hold one. Generalist family physicians had similarly high levels of agreement concerning the statement: “The CAC will inflate the minimum credentials required to practice family medicine”. Respondents were equivocal about the idea that family physicians who pursue a CAC are motivated by a need to address community needs that grow out of experience in comprehensive family medicine (4.1 ± 1.6). There was a high level of agreement amongst all respondents that physicians choose to pursue a certificate because of personal and/or professional reasons (e.g., lifestyle, remuneration, interest) (5.3 ± 1.4).
There was a high level of agreement amongst all respondents that physicians choose to pursue a certificate because of personal and/or professional reasons (e.g., lifestyle, remuneration, interest) (5.3 ± 1.4) . This study aimed to describe the perceptions of family physicians in Canada regarding the influence of the College’s CAC program on comprehensive care delivery. While our previous work generated rich and relevant descriptions and propositions concerning the CAC program, , - this pan-Canadian survey allowed us to further affirm our understanding of the perceptions held by family physicians with and without CACs across the country. Our findings illustrate that family physicians with CACs operationalize their practice in various ways with respect to the degree to which they maintain comprehensive family medicine practices, and the geographic location and distance at which they situate their practices as a function of their CAC domain. This variance in practice organization highlights that the CAC holders have different degrees of opportunity to arrange their practices in a way that addresses diverse community needs. For instance, our FPA-CAC respondents reported having rural practices to a greater degree than other CAC physicians, reflecting the greater need that rural areas have for family physician-led anesthesia services. From this survey, we identified several benefits of the CAC program, which echo our previous multiple case study. Respondents reported working across various forms of collaborative care in the community, with most indicating a shared care model. In working within this arrangement, CAC holders are able to provide expertise while also allowing the referring physician to preserve a continuous therapeutic relationship with patients. In this regard, the enhanced skill allows these family physicians to act as a resource that enhances access to care within communities. For example, our survey participants indicated that those with the certificate can alleviate the travel burden associated with accessing specialist services not usually available in rural and remote communities. While our study’s CAC holders generally reported working in shared-care models, previous reports have described certain CAC holders such as SEM or FPA-CAC family physicians working in specialist-aligned transfer of care models. Working in this model also comes with its own benefits as it can help reduce patient wait times due to the formal relationship between the CAC holder and the specialist to whom the referring family physician transferred the care. Notably, CAC holders are perceived by their colleagues as fundamentally different than generalists and specialists. Indeed, despite taking a family medicine approach to care delivery in their respective clinical domains, many CAC-holder respondents in our study reported not maintaining comprehensive family practices. Some reported organizing their practices in a way that aligned with personal and professional interests rather than community needs – a finding that resonates with our previous CAC-specific work. , , This decision was seemingly associated with greater job satisfaction related to operating within smaller and more manageable practice scopes that afford improved work-life balance. 
From this perspective, the College should be careful about the way enhanced skill training is conceptualized as part of its current curriculum expansion project, which is specifically designed to ensure that graduating physicians are prepared to deliver comprehensive community-based care aligned with generalist principles. While the expansion will organize residency training over a longer period, this additional time should likely remain focused on educational interventions that promote greater competence and confidence with those services that make up the core of comprehensive family medicine. An approach that focuses extra time on enhanced skills training may be less effective in promoting an orientation to comprehensive practice. There are several limitations in this study. First, we received a low volume of responses relative to the number of family physicians practicing across the country. Our approach to sampling was, of course, constrained to those College members who had agreed to be contacted for research purposes. As such, we are unable to determine the proportion of potential respondents that have a CAC. However, the response rate amongst our sample population of CAC holders is high (19.5%-24.5%). Within this variable interpretation of our sample size relative to population, we must temper our perceptions of the explanatory power of the responses. In this regard, there may be self-selection bias that yielded a larger proportion of CAC holders in our sample than may have been expected. Secondly, as is inherent to survey research, we acknowledge that respondents may be subject to recall bias. Lastly, respondents that held multiple CACs of interest were assigned a label according to the certificate they listed first. Given the nature of the study, we were not able to gain an understanding of the nuanced perspective of multi-CAC holders. Many CAC holders leverage their expertise and knowledge to arrange their practices in diverse ways to meet the health needs of communities. There are many benefits to engaging in collaborative practices and reducing the barriers to accessing healthcare for underserved communities. However, this is not the case for all CAC holders. In this regard, unintended consequences associated with the certificate program were also noted. While aligning practice arrangements to personal and professional interests may be positive for CAC holders, this can present risks to the delivery of comprehensive, continuous family medicine care organized around community need. As such, it is essential that we continue to promote practice arrangements that are grounded in the principles of family medicine. With this in mind, we encourage increased investment in health system improvements for generalist family medicine, which incentivize community-adaptive, comprehensive, continuous family medicine practice.
Pathogen Inactivating Properties and Increased Sensitivity in Molecular Diagnostics by PAXgene, a Novel Non-Crosslinking Tissue Fixative
64fc1132-1b3e-4266-ad1b-09ce5f3334ee
4790970
Pathology[mh]
For decades buffered formaldehyde solution (formalin) has been the gold standard for tissue preservation in histopathological diagnostics . Furthermore formalin is used for pathogen inactivation in vaccine production , and as an active component in disinfectants underlining its favourable properties as a pathogen inactivating chemical. The development of nucleic acid-based molecular diagnostics has revealed several drawbacks of formalin fixation in molecular diagnostics, particularly in the context of personalized medicine. Formalin fixation leads to crosslinks between proteins and nucleic acids as well as to fragmentation which adversely affects molecular analytical methods. Furthermore sequence artefacts arising from damaged DNA templates increase the risk of false-positive and false-negative calls in the diagnostic context . Therefore the use of fresh or cryopreserved bio-samples is currently the preferred approach for optimal performance in molecular analyses. Conversely, collecting cryopreserved samples in routine health care faces several limitations. Due to the limited amount and size of human tissue samples available (e.g. biopsies), tissues cannot be processed in parallel by formalin fixation (required for histopathological diagnosis) and cryopreservation (for molecular diagnosis). Furthermore, cryopreservation cannot be applied as a routine procedure in health care for logistical and financial reasons. As a consequence a variety of alternative tissue preservation methods, such as alcohol-based fixatives UMFix (Sakura Finetek, Torrance, CA) , picrate fixative Bouin´s solution (Newcomer Supply, Middleton, WI) , HOPE and RNAlater were developed and tested as to whether they fulfil the required features of optimal preservation of tissue morphology and nucleic acids. Recently, the PAXgene Tissue System (PAXgene) (PreAnalytiX, Hombrechtikon, Switzerland) was developed by using a high throughput screening approach to find the best formulation for combined preservation of morphology and biomolecules . PAXgene is a commercially available non-crosslinking fixative comprising a fixation (PAXgene Fix) and stabilization solution (PAXgene Stab) based on a mixture of different alcohols, acetic acid and a soluble organic compound . Histological assessment of PAXgene-fixed paraffin-embedded (PFPE) tissues showed that morphological features were preserved comparable to formalin-fixed paraffin-embedded (FFPE) tissues . Importantly, the preservation of nucleic acids in PFPE-tissues was shown to be of similar high quality as in fresh frozen samples . Furthermore, proteomic analyses, such as Western blot and reverse phase protein arrays showed that detection of different proteins, including phosphor-proteins from human PFPE-samples was comparable to cryopreserved tissue . The evidence of well-preserved nucleic acids and proteins now raises the question whether PAXgene fixation results in proper inactivation of pathogens, or if other biosafety requirements have to be established for clinical personnel handling infectious human samples as is currently the case for formalin. Data obtained from immunocytochemistry assays showed that PAXgene inactivates influenza A virus, adenovirus and human cytomegalovirus (CMV) at least as well as formalin . However, information on further microbiological species is lacking. Hence there is a major demand for analysis of the pathogen disabling properties of PAXgene compared to formalin. Therefore we tested the inactivation of 6 bacterial and 22 fungal strains. 
In addition, we analysed the impact of fixation on CMV detection since it is highly seroprevalent, responsible for the most frequent complications after organ transplantations and the most important tissue-related viral indication for initiating pre-emptive therapy in organ transplant recipients. Because of the lack of specific guidelines for biosafety assessment of tissue fixatives, we followed the guidelines developed for accreditation of disinfectants (DGHM, German Society of Hygiene and Microbiology), the requirements for validation of sterilization procedures for bone transplants and CEN (European Committee for Standardisation) CT 216 EN 14485 for selecting test organisms for in vitro and cell culture assays. In all assays we compared PAXgene with formalin fixation, for which long-term practical experience exists on biosafety risks, although this is rarely documented in the literature. Bacteria inactivation assays To assess the inactivating property of PAXgene, the reduction of colony forming units per millilitre (cfu/mL) was determined after fixation of different bacterial strains with PAXgene compared to formalin (4% formaldehyde buffered to pH 7.0). Phosphate-buffered saline (PBS)-treated bacteria served as reference. A reduction of bacterial growth of 10^5 was considered sufficient inactivation, as this is requested for disinfectants and recommended for medical devices, blood products and bone transplants. Clostridium sporogenes (Cs), Staphylococcus aureus (Sa), Bacillus subtilis (Bs), Pseudomonas aeruginosa (Pa), Mycobacterium smegmatis (Ms) and Mycobacterium terrae (Mt) were obtained from ATCC or DSMZ (German Collection of Microorganisms and Cell Cultures). Overnight cultures (ONCs) of Sa, Bs and Pa were prepared in appropriate liquid media by inoculation of a single colony and incubated overnight at 37°C and 200 rpm under aerobic conditions. For anaerobic cultivation of Cs, ONCs were hermetically sealed and incubated at 37°C without shaking. After overnight cultivation, 1 mL of the bacterial suspension was filled into 4 tubes per strain, centrifuged for 20 minutes at 1,200 x g, and the supernatant was discarded. Mycobacteria strains were washed with PBS from a confluent cell layer of an agar plate. Due to extensive clumping of Mt, cells were dissociated with the GentleMACS Dissociator (Miltenyi, Bergisch-Gladbach, Germany). One mL of each Mycobacteria suspension was filled into 4 tubes and centrifuged at 10,000 x g for 5 minutes. Each pellet was resuspended in 1 mL of the respective inactivation solution. Since the PAXgene procedure comprises two steps (i.e. PAXgene Fix followed by PAXgene Stab), both steps were tested separately and in combination. After 30 min incubation with fixatives or PBS at room temperature, cells were centrifuged to pellets. One of two PAXgene-fixed samples was resuspended in PBS; the second sample was incubated with 1 mL PAXgene Stab for another 30 min, centrifuged and resuspended in PBS. Dilution series of 1:10 were prepared and 100 μL were plated on the appropriate agar medium. Bs, Sa and Pa were cultivated under aerobic conditions at 37°C for 24–48 hours. Cs was anaerobically cultivated using the GENbag anaer system (bioMérieux, Marcy L'Etoile, France) at 37°C for 24–48 hours. Ms and Mt were incubated at 37°C for four and fifteen days, respectively. Six independent series of assays were performed with Bs, Sa, Ms and Mt, seven with Cs and four with Pa.
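The pass/fail criterion used in these assays is a simple log-reduction calculation: colony counts are back-calculated to cfu/mL from the plating dilution and plated volume, and the fixed sample is compared with the PBS-treated reference. The short Python sketch below illustrates that arithmetic; the colony counts and dilutions shown are illustrative placeholders, not values from the study.

```python
import math

def cfu_per_ml(colonies: int, dilution: float, plated_volume_ml: float = 0.1) -> float:
    """Back-calculate cfu/mL from a plate count.

    colonies:         colonies counted on the plate
    dilution:         dilution of the plated suspension, e.g. 1e-6 for the 10^-6 step
    plated_volume_ml: plated volume (100 uL = 0.1 mL in the assays described above)
    """
    return colonies / (dilution * plated_volume_ml)

def log10_reduction(control_cfu_ml: float, treated_cfu_ml: float) -> float:
    """Log10 reduction of a fixed sample relative to the PBS-treated reference.

    Plates without any colonies would need a detection-limit convention
    (e.g. counting them as fewer than one colony); that case is omitted here.
    """
    return math.log10(control_cfu_ml / treated_cfu_ml)

# Illustrative numbers only, not data from the study:
control = cfu_per_ml(colonies=152, dilution=1e-6)  # PBS-treated reference
treated = cfu_per_ml(colonies=3, dilution=1.0)     # fixed sample, plated undiluted

reduction = log10_reduction(control, treated)
print(f"log10 reduction: {reduction:.1f}")
print("sufficient inactivation" if reduction > 5 else "below the 10^5 threshold")
```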
Sample processing and paraffin embedding We investigated whether the paraffin embedding process following fixation, as applied in the histopathological analysis of tissue samples, further inactivated fixation-resistant bacteria. A standard embedding process comprised four ascending ethanol steps from 70% to 99% over four hours, followed by two hours of isopropanol (Sigma-Aldrich, Steinheim, Germany), two hours of xylene (J.T. Baker, Deventer, Netherlands) and three hours of molten paraffin (ACM Herba-Chemosan, Vienna, Austria) at 55°C. ONCs of Cs were prepared as described above and incubated with 1 mL of 70% ethanol for 30 min; after centrifugation, the resulting cell pellets were resuspended in 100 μL PBS, plated on appropriate agar plates and cultivated as described above. Since 70% alcohol fully inactivated Cs (no growth of colonies at any dilution), no further embedding steps were investigated. Fungi inactivation assay Fungal strains (yeast and mould fungi) were obtained from the culture collections ATCC, DSMZ and CBS (Fungal Biodiversity Centre) and as patients' isolates from the Biobank of the Medical University of Graz. Cultivation and inactivation assays were performed according to the DGHM guidelines for disinfectants with modifications as follows. Four samples each of yeast cell and spore suspensions, with a turbidity equivalent to a McFarland 4 standard, were prepared and centrifuged to cell pellets as described above for bacteria. Incubation with PAXgene and formalin was performed for 2 hours due to the higher resistance of spores. One hundred microliters of each of the fixed samples and of dilution series of PBS control samples (mean 10^-12) were plated onto Sabouraud agar plates (Oxoid, Basingstoke, UK) and incubated at 30°C for 48 hours. Two independent series of experiments were performed for each fungal strain. CMV inactivation assay MRC-5 cells (human lung fibroblast cells, LGC Promochem, Germany, ATCC #CCL-171) were cultivated in 182.5 cm^2 cell culture flasks (VWR, Vienna, Austria) with Minimum Essential Medium supplemented with GlutaMax (Gibco, Life Technologies, UK), 10% fetal calf serum (Gibco) and 1% Penstrep (Gibco) at 37°C and 5% CO2 until 60–70% confluency. Infection was performed with 2 mL of a suspension of human cytomegalovirus AD 169 (HPA #622, former Health Protection Agency, now Public Health England, UK) containing 900 plaque-forming units/mL per flask, except for the negative control. Cells were cultured until massive cytopathic effects (CPEs) were observed (typically 10–14 days after infection). Cells were harvested using 0.05% Trypsin-EDTA (Gibco), centrifuged, and the resulting cell pellets were washed with PBS. Pellets were resuspended and distributed to 8 reaction tubes (1.5 mL). Two tubes each were incubated either with PBS (CMV-positive control), PAXgene Fix or formalin as described above for one hour. After removing PAXgene Fix by centrifugation, cells were stabilized for one hour with PAXgene Stab. One set of samples was centrifuged and the cell pellets were washed with PBS, injected into 500 μL liquid 5% low-melt agarose (Carl Roth GmbH, Karlsruhe, Germany) in a 1.5 mL reaction tube and immediately cooled on ice. The resulting agarose plugs were placed in tissue cassettes and processed in an automated tissue processor (Tissue Tek VIP, Miles Scientific, Sanova, Vienna, Austria). The second set of samples was neither processed nor paraffin-embedded. All samples (paraffin-embedded and not paraffin-embedded) were dissociated in 5 mL MEM using a GentleMACS Dissociator (Miltenyi, Bergisch Gladbach, Germany).
Floating paraffin was removed and cell lysates were applied to new MRC-5 monolayers grown in 75 cm^2 cell culture flasks to detect viable virus. Cultivation was performed until CPEs appeared in PBS-treated cells (positive control). Cells were harvested on day 19 after infection with all lysates. To further investigate CMV viability on the basis of viral transcripts, RNA was isolated from all samples using the AllPrep DNA/RNA/Protein Mini Kit (Qiagen). Quantification of these and all following extractions was performed on a NanoDrop 100 Spectrophotometer (PeqLab, Erlangen, Germany). Reverse transcription including DNase I digestion was performed using the QuantiTect Reverse Transcription Kit (Qiagen). Primers for the immediate-early CMV gene TRS1 (terminal right short 1) (NCBI Reference Sequence: NC_006273.2) were designed and blasted using the NCBI primer design tool (www.ncbi.nlm.nih.gov/tools/primer-blast). Forward primer: acacagatggaacaaaagcaga; reverse primer: acgctgtggtttggagattga; amplicon 170 bp (Eurofins MWG Operon, Ebersberg, Germany). RT-qPCR was performed on an Applied Biosystems 7900HT Fast Real Time PCR System (Applied Biosystems, Foster City, USA) using a TaqMan-specific set of PCR reagents following the manufacturer's instructions. Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) was used as a reference gene. Forward primer: ccacatcgctcagacaccat; reverse primer: gtaaaccatgtagttgaggtc; amplicon 153 bp (Eurofins). Immunocytochemistry assays employing monoclonal mouse anti-CMV (M085401, Dako) were performed as described previously, confirming the RT-qPCR results. Reverse transcription real-time PCR sensitivity assay To investigate whether PAXgene fixation results in better sensitivity of PCR-based assays as compared to formalin fixation, MRC-5 cells were infected with CMV, harvested seven days post infection, centrifuged to obtain cell pellets and fixed either with PAXgene, formalin or PBS (as CMV-positive control) as described above. Three independent series with triplicate samples were performed. RNA of formalin-fixed cells was isolated with the RNeasy FFPE Kit (Qiagen) without applying the deparaffinization step at the beginning. RNA of PAXgene-fixed cells was isolated using the PAXgene Tissue RNA Kit (PreAnalytiX). For RNA isolation of PBS-treated (CMV-positive control) and non-infected CMV-negative cells, the RNeasy Mini Kit (Qiagen) was used following the manufacturer's instructions. RNA quality of fixed samples was checked as previously reported. To exclude that the observed differences in PCR sensitivity were due to different RNA isolation methods, RNA from an additional set of samples was isolated using the AllPrep DNA/RNA/Protein Mini Kit (Qiagen) for all fixation types. Reverse transcription was performed as described above. One hundred nanograms of cDNA per tube were used, in duplicates and with three biological samples, for RT-qPCR on a Rotor-Gene Q 6000 cycler (Qiagen) employing the Rotor-Gene SYBR Green PCR Kit (Qiagen).
Cell pellets were washed with PBS and homogenized with the GentleMACS Dissociator (Miltenyi) to release virus particles from the cells. Viral DNA was isolated using the QIAamp MinElute Virus Spin Kit (Qiagen), and 20 μL of template DNA were used for detection of CMV with the IVD-approved artus CMV RG PCR Kit CE (Qiagen) in duplicates according to the manufacturer's instructions. Statistical analysis of PCR data RT-qPCR and qPCR data on sensitivity (exported from Rotor-Gene Q Series Software 2.0.2, dynamic tube normalization) were analysed with IBM SPSS Statistics 22, using the Kolmogorov-Smirnov test of normality with Lilliefors significance correction and the t-test for paired samples (α = 0.05).
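As a minimal sketch of how this statistical comparison could be reproduced outside SPSS, the Python snippet below applies a Lilliefors-corrected Kolmogorov-Smirnov normality check followed by a paired t-test (α = 0.05) to paired Cq values. The arrays contain placeholder numbers, not data from the study, and the statsmodels and SciPy functions are simply open-source counterparts of the SPSS procedures named above.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import lilliefors

# Placeholder Cq values for paired samples (the same biological replicate measured
# after PAXgene and after formalin fixation); not data from the study.
cq_paxgene = np.array([18.2, 18.5, 18.1, 18.4, 18.3, 18.6])
cq_formalin = np.array([22.1, 22.6, 22.0, 22.4, 22.3, 22.8])

# Normality of the paired differences: Kolmogorov-Smirnov test with Lilliefors correction.
diff = cq_paxgene - cq_formalin
ks_stat, ks_p = lilliefors(diff, dist="norm")
print(f"Lilliefors KS: statistic={ks_stat:.3f}, p={ks_p:.3f}")

# Paired t-test at alpha = 0.05.
t_stat, t_p = stats.ttest_rel(cq_paxgene, cq_formalin)
alpha = 0.05
print(f"paired t-test: t={t_stat:.2f}, p={t_p:.4f}, "
      f"{'significant' if t_p < alpha else 'not significant'} at alpha={alpha}")
```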
Bacteria inactivation assay Bacterial strains were treated with PAXgene and formalin according to the DGHM guidelines and European standard EN 1040 for bacteria, considering a reduction of more than 10^5 as sufficient inactivation. Sa was inactivated by PAXgene (Fix and Stabilizer) in 6 out of 6 assays. After treatment with PAXgene Fix alone, viability was above the threshold in 2 out of 6 assays. No colonies were detected after formalin treatment. Inactivation of Bs was sufficient in 5 out of 6 series of assays for all fixatives. Inactivation of less than 10^5 was detected after treatment with PAXgene Fix alone in 2 out of 6 assays, as well as after treatment with PAXgene Fix and Stab in one assay. No Cs colonies were found after 30 min of PAXgene Fix and Stab in 3 out of 7 assays, whereas 4 assays revealed varying numbers of colonies. With formalin, either no colonies or a reduction beyond the requested 10^5 threshold was observed in 6 out of 7 experimental series; in one assay the colony count was not reduced below the threshold. In an additional series of three assays with an extended incubation time of 2 hours, none of the PAXgene-treated series and only two of the formalin-treated series led to sufficient inactivation of Cs. Because Cs was the most resistant bacterial strain, it was exposed to 70% ethanol, mimicking the starting condition of tissue processing, to investigate synergistic inactivation effects of fixation and tissue processing. No colonies were detected in any of these experiments (data not shown). Pa and Ms were the most sensitive bacterial strains and developed no colonies after fixation in any of four and six experimental series, respectively. Mt colonies were detected after PAXgene treatment in three and after formalin treatment in two out of six series, but inactivation was more than 10^5 in all six series. Inactivation of fungi To expand the spectrum of human pathogenic microorganisms, various species of fungi were used to investigate the inactivation ability of PAXgene compared to formalin. The starting concentration for all fungi inactivation assays was a turbidity equivalent to a McFarland 4 standard. According to the DGHM guidelines and European standard EN 1275 (for yeasts), the requested reduction of more than 10^4 cfu/mL for fungi was reached by PAXgene Fix alone, by PAXgene Fix and Stab, as well as by formalin for all fungal strains tested.
With five different Candida species, some single colonies appeared after PAXgene (Fix and Stab) as well as after formalin treatment, but in all assays the colony numbers were low enough to meet the requested reduction threshold of 10^4. The most resistant species was the black yeast Exophiala dermatitidis, showing the most colonies and the lowest reduction after PAXgene Fix alone. PAXgene Fix plus Stab was as effective as formalin treatment. The yeasts Cryptococcus neoformans and Geotrichum candidum were inactivated with similar efficacy by PAXgene and formalin. For testing filamentous fungi, mean dilutions up to 10^-12 were necessary to obtain evaluable numbers of colonies from PBS-treated control samples. All four Aspergillus species and Penicillium chrysogenum yielded 0.8 to 8 cfu/mL after PAXgene Fix treatment alone. Aspergillus niger, Penicillium chrysogenum and Rhizopus oryzae showed 0.1 to 0.3 cfu/mL after PAXgene Fix and Stab. After formalin treatment, 0.1 to 6.8 cfu/mL were detected for two Aspergillus strains, Penicillium chrysogenum and Cunninghamella bertholletiae. CMV inactivation After PAXgene as well as after formalin fixation, no specific CMV TRS1 immediate-early gene transcripts were detected by RT-qPCR after 19 days of cultivation. Exposing CMV pellets to tissue processing conditions had no further effect because fixation alone already resulted in complete inactivation (data not shown). Impact of fixation on sensitivity of reverse-transcription real-time PCR assay Because PAXgene fixation led to markedly better preservation of RNA and DNA than formalin in human tissue, we investigated whether these properties also increased the sensitivity of the detection of viral DNA and transcripts in biological samples, which might be beneficial for diagnostic applications. To address this question, CMV-infected MRC-5 cells were fixed either with PAXgene (Fix and Stab) or formalin, or treated with PBS as control. RT-qPCR was performed to detect TRS1 transcripts. The unfixed positive control (PBS) and PAXgene-fixed samples resulted in essentially identical Cq (quantification cycle) values. After formalin fixation, Cq values of TRS1 were increased by 4 cycles compared to PAXgene, indicating significantly higher sensitivity (p < 0.0001) after PAXgene fixation. This advantageous effect was even more pronounced for GAPDH, which was detected 10 cycles earlier in PAXgene-fixed than in formalin-fixed cells. The Cq values of no-template control samples were higher than 31. To exclude the possibility that these differences in PCR sensitivity were due to the different RNA isolation methods used, RNA from all samples (PBS-, PAXgene- and formalin-treated) was additionally isolated with the same isolation kit. However, neither the formalin- nor the PAXgene-treated samples were suited for AllPrep RNA isolation (Qiagen), which works well for PBS-treated samples, making a direct comparison of isolation methods impossible (data not shown). Quantitative real-time PCR sensitivity assay To investigate the impact of PAXgene or formalin fixation on the sensitivity of CMV DNA detection, a quantitative real-time PCR assay employing the IVD-approved artus CMV RG PCR Kit (Qiagen) was used. CMV DNA was isolated from infected MRC-5 cells as described above. The assay detected 8,000 copies/μL in PBS-treated, 1,000 copies/μL in PAXgene-fixed and 165 copies/μL in formalin-fixed samples. The difference between PBS-treated and fixed samples (formalin and PAXgene) was highly significant (p < 0.0001), as was the difference between PAXgene and formalin (p < 0.05).
Earlier detection of CMV after PAXgene fixation compared to formalin is evident.
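The fold-differences reported here and in the Discussion below can be checked directly from the figures quoted in this section, under the usual assumption of roughly 100% amplification efficiency (the template amount changes by a factor of about 2 per cycle). A short worked example of that arithmetic:

```python
# Fold-difference in sensitivity from the copy numbers quoted above (copies/uL).
copies_pbs, copies_paxgene, copies_formalin = 8000, 1000, 165
print(f"CMV DNA, PBS vs. PAXgene:      ~{copies_pbs / copies_paxgene:.0f}-fold")
print(f"CMV DNA, PAXgene vs. formalin: ~{copies_paxgene / copies_formalin:.1f}-fold")  # ~6-fold

# Fold-difference implied by a Cq shift, assuming ~100% amplification efficiency
# (template roughly doubles every cycle, so fold = (1 + efficiency) ** delta_cq).
def fold_from_delta_cq(delta_cq: float, efficiency: float = 1.0) -> float:
    return (1.0 + efficiency) ** delta_cq

print(f"TRS1 transcripts, delta Cq of 4:   ~{fold_from_delta_cq(4):.0f}-fold")   # ~16-fold
print(f"GAPDH transcripts, delta Cq of 10: ~{fold_from_delta_cq(10):.0f}-fold")  # ~1024-fold
```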
Fixatives that on the one hand preserve morphologic features well, and on the other hand do not modify biomolecules, are becoming increasingly important in the context of personalized medicine, which often requires combined analysis of classical histopathological features and molecular biomarkers. The diagnosis of infectious diseases would also benefit from such fixatives, which result in increased sensitivity in the detection of pathogens and, at the same time, provide the opportunity to correlate the presence of pathogens with morphological alterations in tissues. PAXgene, which has been developed and intensively evaluated in the context of the European Framework Programme 7-funded project SPIDIA (www.SPIDIA.eu), fulfilled both requirements. However, the exceptionally good preservation of biomolecules in PAXgene-fixed tissues raised concerns as to whether it sufficiently inactivates pathogens. Information on the inactivation capabilities of PAXgene was therefore required to decide whether PAXgene can be used following the same biosafety rules as established for health care workers involved in processing formalin-fixed biological samples.
The results obtained in this study showed similar inactivation activity of PAXgene and formalin for bacteria and fungi. The relevance of the less efficient inactivation of Cs by PAXgene is difficult to interpret, particularly because only few systematic studies on formalin have been published, and due to the absence of specific guidelines for fixatives. Even formalin could not sufficiently inactivate Cs in three out of ten assays, independent of inactivation time suggesting incomplete inactivation capabilities. Reports on formalin concerning incomplete inactivation of picornaviruses causing poliomyelitis and foot-and-mouth disease and a comparative study investigating different bacterial strains are in line with our observation. Since routine tissue fixation is followed by tissue processing comprising exposure of infected samples to increasing concentrations of alcohol further inactivation of pathogens after processing and paraffin embedding is expected. Indeed, we found sufficient inactivation of the most resistant strain tested ( Cs ) by simulating the alcohol processing steps in our study. However there are also infectious agents, such as prions, which are not inactivated by formalin or alcohol . Therefore, fixed tissues cannot be considered as not infectious in general and cautious handling following biosafety regulations is recommended independent of the fixation method . Our studies with CMV not only confirmed the immunohistochemistry results of CMV inactivation by PAXgene as reported previously by using a more sensitive RT-qPCR assay but also revealed potentially superior features of PAXgene fixation for molecular diagnosis of pathogens. There is an increasing need for more sensitive and accurate detection of pathogens, particularly in the field of transplantation medicine . We found a 6-fold increased sensitivity for CMV DNA detection when employing an IVD-approved kit and a 16-fold significantly increased sensitivity to detect CMV transcripts in PAXgene fixed samples compared to formalin. Since a direct comparison of different fixatives in PCR-based assays is hampered by the fact that differently fixed samples may require different nucleic acid isolation protocols we compared the best achieved sensitivity using the optimised isolation protocol for each of the fixatives. PAXgene fixed samples more closely resembled unfixed samples in PCR-assays which are in line with previous observations that PAXgene interferes less with sample pre-analytics than formalin . The properties of PAXgene fixation of excellent preservation of nucleic acids and morphology might be of particular relevance in the context of transplantation medicine where assessment of morphological features of organ rejection and highly sensitive molecular tests for detection of pathogens ideally have to be performed from the same tissue biopsies.
Perceptions of supervision and feedback in PaedCompenda, the competency-based, post-graduate curriculum in pediatrics (www.paedcompenda.de)
cac866f9-6e9d-4d38-8eec-5282c3965630
11656182
Pediatrics[mh]
The professionalization of post-licensure medical training in primary care pediatrics is being advanced using a digital, competency-based curriculum called PaedCompenda (PC) [ https://www.paedcompenda.de/ ] . However, despite broad approval, the implementation of this structured curriculum faces many obstacles which cannot be reduced solely to the notorious lack of time in the medical practices, but are also due to the lack of acceptance for the new teaching strategies accompanying the implementation of PC. At the start of a three-year-long research project (2019-2022) on the implementation of PC at 19 pediatric practices belonging to two model regions (Schleswig-Holstein and Mittelfranken), the feedback given by medical specialists to physician trainees directly after observing their interactions with patients was noticeably viewed with criticism, as a core teaching strategy, in terms of the time spent and the expected benefit. This skepticism about the teaching method of observation-based feedback in the training of medical specialists is a known and much discussed issue. Many reasons have been identified in studies to explain this hesitation, primarily the unfavorable conditions of workplace-based learning in hospitals, a low quality of feedback, and the much too formal design of the observation and feedback procedures, particularly in the context of test-relevant assessments (e.g., Mini-CEX and DOPS) , , , , , , , , . As a result, these implementation difficulties have generated numerous interventional studies. Weallans et al. evaluate findings in an up-do-date review and identify empirically proven, but not yet sufficiently evidence-based, methodological components and principles that promise effective observation-based feedback in post-graduate learning environments. The growing understanding that the effectiveness of feedback ultimately “depends on many variables, which overlap in individual cases and can inhibit or promote each other in their effects” refers to the limits of interventional studies that investigate single variables derived from theory and obligate the test subjects, for the duration of the study, to follow an approach that is questionable in terms of sustainability because it is heavily regimented and independent from the situation. Given this background, the medical practices participating in the implementation project were intentionally not required to follow a uniform approach. Although the teaching strategy was piloted in introductory seminars, when it came to implementation, each medical practice was called upon to seek a viable way that promised sustainability. Instead of prescribing a concrete approach from the outset, the initial focus was meant to be on the question central to this paper concerning the trainees’ subjective perceptions of the qualitative differences in concretely experienced supervision and feedback . Based on this, in a reversal, so to speak, of the method described above, content-specific and methodological components were reconstructed which, in their interactions, are able to explain the differences in the trainees’ perceptions of the benefits and usefulness. Following such an explanatory interpretive analysis , , , , it is possible, considering the findings of previous studies, to draw conclusions regarding the further development of this teaching strategy which does justice to the complex reality of individual instances of supervision and feedback. 
The dataset on which the results presented here are based encompasses problem-oriented entrance and exit interviews with 28 physician trainees at 19 medical practices and four focus group discussions with 28 physician trainees at four different pediatric hospitals who had no experience with PC. In addition, six feedback conferences conducted by four trainers at the medical practices and 23 patient interactions with the physician trainees were videotaped. Feedback conferences lasting 60 to 90 minutes were held by the physician trainee, medical teacher, and researcher (see attachment 1). Verbatim transcripts were made of all audio and video recordings. The relevant questions guiding the interviews are contained in the attachment, although they also point toward other questions pursued by the research project (see attachment 2). The selection of the first 10 medical practices and their physician trainees was done pragmatically: enquiries were made at practices in the German regions of Mittelfranken and Schleswig-Holstein that cooperated with hospitals via networks. The other data was collected afterwards, step by step, following the principles known in grounded theory as minimum and maximum contrast (among other things, the varying levels of experience amongst the physician trainees and the trainers). Even if this kind of systematic comparative analysis does not yield any statistically representative information, it does enable explanatory and interpretive statements about an investigated subject. Data analysis was done using the computer program f4analyse in accordance with reconstructive grounded theory. This method is more consistent than traditional grounded theory in accounting for knowledge that is not reflexively available, as expressed in narratives and the description of experiences. In this way, it is possible to reconstruct the implicit learning that has great significance for decision-making skills and responsibility. The coding procedure used is capable of differentiating between socially desirable or singular statements and those statements with a high degree of personal relevance pointing to the heart of the matter concerning the benefits focused on here. Statements made by the physician trainees were coded consistently in the context of meaning of the entire interview. Furthermore, central text passages underwent detailed sequential analysis (see examples in attachment 3), in which coding was done not only in terms of content, but also in regard to form: how and in which context something is said. This analytical method does not allow for a purely descriptive presentation of results, meaning that the separation of description and interpretation, as established in quantitative social research, cannot be adequately converted into a qualitative form of social research. This is because the findings are made on the basis of interviews, the meanings of which cannot be simply “read” from the literal meanings of the words, but rather require a methodologically controlled process of interpretation. At the start of the project, there were clear reservations among the physician trainees at the participating practices about being supervised and then receiving feedback. Although feedback, particularly constructively critical feedback, had been missed in previous post-graduate training, their hesitation was still impossible to overlook when it involved actually implementing the teaching method. To explain these reservations, their experience with receiving feedback in the hospital setting was examined more closely.
Overall, it was noticeable that descriptions of problematic or absent feedback predominated; only four of 56 physician trainees with hospital experience reported receiving feedback that explicitly enhanced their learning during their post-graduate training. As a result, it was possible to reconstruct key characteristics and aspects of problematic experiences with supervision and feedback that lead toward a negative perception . At the same time there were coping strategies by the physician trainees identified that cemented their lack of acceptance for situations involving supervision and feedback. A summary of these findings is presented in figure 1 . Over the course of implementing PaedCompenda at the medical practices, the level of acceptance for the teaching method has improved in all of the surveyed physician trainees. However, a distinction must be drawn between an unreservedly positive perception and a less positive perception of the benefits. In the following, findings are presented that justify these distinctions. Less positive perceptions at the participating medical practices Figure 2 lists characteristics and aspects of feedback experiences leading to a reduction in the positive perception in the physician trainees at the medical practices, along with typical coping strategies, in the same manner as figure 1 . It is impossible to address all of these aspects within the scope of this paper. Those which are covered in detail in the following represent a deliberate selection from the larger picture and involve aspects which have not previously received much attention in the literature. Expectations of being imitated without justifiable reason One type of feedback which was experienced negatively by the physician trainees at the medical practices involves an incomprehensible or almost incomprehensible expectation of imitation . Specifically, when the recommended corrections in the trainer's feedback do not align with the guidelines or evidence-based recommendations and no further justification is given for this deviation. Such eminence-based, rather than evidence-based, judgements promote “playing along” behavior in situations of supervision, as is made clear in the following statement: In fact, if I know that the one supervisor is present, then I do the work more like he wants it done. And when the other supervisor is there, then I do it a little more like she wants it done. If I am alone, I choose somewhere in-between the two that makes sense to me (MF 6). This behavior is heightened if trainers position themselves as irrefutable role models in the way that they give feedback and become very much caught up in a paradigm of right or wrong ; when “this is how I do it” turns into “this is how it is done” . As a result, specifically advanced and self-confident physician trainees feel themselves restricted in their autonomy, an aspect that in the field of education generally is a central factor of sustained motivation to learn . In response, these trainees emphasize their skill at assessing themselves and their ability to self-monitor, hence relativizing the benefits of being observed and receiving feedback. 
Unspecific and ambivalent feedback
Similarly frustrating for physician trainees is feedback that lacks specificity and/or feedback in the form of taciturn statements that open up a wide scope for interpretation which, depending on personal disposition, tend to be interpreted either in a self-serving or a self-critical way, as is shown in the following example: Interviewer: Did she (the trainer) then give some feedback at the end? Interviewee: Not explicitly. (Both laugh.) I mean, she did not say: “You did that well”. But then I did – what one is only so happy to do in these cases – I said something like: “Yeah, well, I was suddenly really nervous when you were looking over my shoulder at what I was doing”. And in her reply, it was possible to make out that she thought everything was okay. (SH 13) The feedback from the trainer in this example is specifically not explicit, and the consequence is that, although the physician trainee was able to “make out” some message in the trainer’s reaction, one could also say that the trainee was required to do precisely this. In the videotaped feedback conferences, it was also seen that the trainers routinely followed up critical comments with a kind of relativization, in which they emphasized that what was happening here was really just complaining about nothing truly important. As a result, the criticism took on a distinctly ambivalent character – something that would rather be hidden or only handled with kid gloves. This also corresponds to the fear often expressed by the trainers in the feedback workshops that their critical comments will compromise or discourage the trainees. In such an atmosphere of inhibited criticism, a natural culture of observation and feedback can hardly be established: physician trainees primarily remember the situations where they were observed and received feedback as an uncomfortable learning environment with limited benefits. As a consequence, these physician trainees attempt to avoid the trainer's attention and, in this context, the purported lack of time is only too useful for steering clear of such situations.
Positive perceptions at the participating medical practices
The methodological and content-based features of observation-based feedback favoring an unreservedly positive perception of its benefits are presented in figure 3. Regarding the methodological components, the findings can be summarized as follows: if feedback from the trainer is informed mainly by an atmosphere of learning and not a performance evaluation, then the desire “to do it perfectly for the colleague” can fade into the background: I am just not always good and that’s the reason I am still learning. And even once I have learned everything, there are probably still areas where I make mistakes or things that I can do better. And, yes, that is actually the hardest point, to unlearn that, to think about how you make mistakes and embarrass yourself. (SH 6) This statement reveals how hard it is for physicians to separate themselves from the internalized expectation of infallibility. If trainers are successful in enabling the physician trainees to experience critical feedback primarily as a learning opportunity and not as a source of shame or poor evaluation, then perceptions will become distinctly more positive. Such an experience mainly occurs when the supervision is focused specifically on separate teachable moments and if there is a two-way dialogue exploring the desired and undesired effects of the medical work that was observed.
Very often this is not about whether the treatment administered during a medical consultation was right or wrong, but rather about the ways in which the quality of care could be increased or, conversely, which approaches and measures taken by the physician trainee were particularly effective. Some trainers even used this as an opportunity to scrutinize their own practices and thereby showed themselves to be open to learning. The latter makes it easier for novices to step outside of the right/wrong dualism mentioned above and let go of the “fear of doing something wrong”. In terms of content, these kinds of teachable moments are especially distinguished by a sense that they enable a conscious formation of previously and reflexively inaccessible limits and potentials of one’s own actions (for an example, see the fine analysis in attachment 3). The following goes into brief detail regarding the three central characteristics listed in figure 3.
Feedback on dysfunctional routines
Feedback addressing the suboptimal routines acquired and unquestioningly reproduced by physician trainees in the first few years of their professional practice is described as very helpful: There are so many things, beginning with set phrases that one has gotten used to and doesn’t really notice, but also medical details or finer points or imprecisions that have crept in. Or when explaining to patients: How do I explain the use of a therapy or medication, something that I, myself, don’t quite realize I am doing with imprecision or maybe in a way that isn’t easy to understand. (SH 4) Based on these insights, the physician trainee quoted above judges the supervision of patient-doctor interactions by his trainer as “one of the best parts” of his training at the medical practice. The reason for his positive assessment lies predominantly in the identification of his routine’s unintended effects. Looking back, the routines acquired in the hospital setting are sometimes even described as a kind of “muddling through”. At any rate, whenever physician trainees receive this type of feedback, they are able to see that in a variety of areas they persistently remain at a “good-enough level” because this kind of mentorship was absent from the learning process.
Feedback on underlying lack of confidence
Much of the uncertainty that physician trainees feel at the beginning of their specialist training at a medical practice can be dispensed with by gathering increased experience and having the opportunity to consult with specialist practitioners when needed. That said, there are other insecurities which are unacknowledged, unconscious, or considered not important enough to be articulated – and as a result remain hidden. Physician trainees view it as extremely beneficial when these underlying issues are focused on in the feedback from observing their work and discussed with the aim of finding solutions. Not infrequently, it is precisely on these occasions that competencies they were unaware of are mirrored back to them, which contributes to building confidence. In terms of topics, this often involves a lack of confidence in adequately structuring the medical consultation given specific case details, in quicker and more prudent processes of clinical reasoning, and in the competencies associated with holding conversations with patients (for more details, see figure 3).
Feedback on overlooked or omitted problems
The random nature of patient consultations in primary outpatient care poses an often unexpected challenge for physician trainees. It demands of them a wide-ranging perspective on case histories and diagnoses and, likewise, a heightened alertness for subclinical physical, emotional and cognitive symptoms in children. Furthermore, they must also take notice of the conduct and behavior exhibited by parents. Feedback on supervised situations showing physician trainees where they have overlooked a problem, or even ignored it, is experienced as a clear gain in the required skill of perceiving patients accurately: For example, we had the situation where a mother picked the child up by her hands so that it could sort of sit there, but then she couldn’t really sit down herself, and it all looked like something that was routinely done. Then he drew my attention to it by saying, “Look at this situation…”. I had seen it but hadn’t registered it that way. That was then very helpful and you can learn pretty simply what things you need to pay a bit of attention to. (SH 12) It is significant that “seeing” and “registering” are described in this quote as being two different things. A common insecurity is made visible here: How does what is seen impact a physician's actions during the complex event of a patient-parent interaction? If appropriate feedback does not come from the trainer, what the physician trainee has seen often remains without consequence and is thus occluded. These and similar normalizations of discomfort and irritation were seen frequently on the videotapes of early detection exams (see attachment 1). Situations which were certainly perceived as problematic, even unsettling, were responded to with previously learned routines and not seen as unusual or unexpected situations that asked for a flexible response tailored to fit the individual case. When such situations are recalled during the feedback conference and reflected upon together in search of alternatives, then the judgement about the benefits of being supervised and receiving feedback is unreservedly positive: Being supervised makes total sense, 100 percent. As stupid as it is, but again, afterwards, those two videos: It happens a lot, when I am alone in a consultation, screening a patient, I am reminded of things....I find that super important. (MF 1) In this statement, the physician trainee refers to detailed feedback on videotapes of patient consultations. In general, it is seen that watching the videos together can very successfully increase one's powers of perception and reveal a need to learn.
In the following, focus is placed on the question of how a positive perception of supervision and feedback and, hence, an increase in acceptance of this teaching and learning method could be encouraged in physician trainees. These considerations corroborate and underscore previous findings and recommendations in the literature. First, the results emphasize a now widely stated recommendation, according to which observation-based feedback in medical education and post-licensure training should be designed as an assessment for learning rather than an assessment of learning. In this way, the trainees’ learning goal orientation should be strengthened, while their performance goal orientation fades into the background. Analyses of the learning culture in medicine show that precisely this is relevant in highly regimented, performance-based educational systems, whereby the critical discussion predominantly centers on feedback settings in the context of test-relevant evaluations (summative feedback). For example, after many years of experience with the Mini-CEX, the Royal College judged that feedback content “is often lacking, ineffective, excessively positive and commonly avoids negative aspects”.
Studies from the German-speaking countries likewise contain many indications of a limited perception of the usefulness or benefits, for instance due to the excessive formalization and standardization of feedback procedures. In response to this, a variety of educational institutions have modified their evaluation forms and replaced a reductionist, scale-based mode of scoring with an open-ended, criteria-guided form of evaluation. Our three-year research project came to the same conclusion: the original feedback forms were revised based on the SIWF form. One aim was to formulate general evaluation criteria which encompass the range of expectations – medical, organizational and communicative – specific to general ambulatory pediatrics (cf. attachment 4). The sources for this evaluative “anchor” are guidelines, textbooks on primary pediatric care, and analyses of recorded patient consultations. Given a context in which trainers impose expectations of imitation that physician trainees can find hard to comprehend, as described above, these criteria ideally provide a corrective against individual instances of eminence-based feedback. This function can be significantly supported through the regular sharing of information on standards of good outpatient care in quality circle meetings for trainers, as they have been implemented in the field of pediatrics. Also consistent with other studies is our finding that, based on observed patient consultations, a friendly correction of one’s self-estimation/self-image, or a productive disturbance of the conviction that one has sufficient ability to self-assess, contributes to the perceived effectiveness of feedback. It has been found numerous times that the development of adequate self-assessment requires professional assessment by an outside person. Our results support the conclusion reached by Jünger et al., according to which instructive feedback must mainly aim to identify unconscious competencies and deficits. The extent to which the three content-based characteristics we identify in meaningful feedback can be generalized needs further empirical clarification, as the question regarding the instructive content of feedback has so far remained mostly unexamined. However, bearing in mind the literature on learning theory, there are indications that the feedback on dysfunctional routines and the feedback on underlying lack of confidence described here increase the perceived instructive content of feedback beyond the scope of general ambulatory pediatrics. For example, the concept of “deliberate practice” makes it clear that frequent repetition of activities to gain a better routine is not sufficient in itself to develop a qualitatively high degree of expertise. Rather, this demands recognizing the unfavorable automatisms that block improvement. Taken within the context here, this means that “deliberate practice” by physician trainees begins when previously learned routines are questioned in terms of their merit and when insecurities that arise while interacting with patients are not dispensed with by normalizing them, but rather understood as a call for continued learning. It makes sense that this requires a conducive learning environment and professional mentorship that supports a desire to reflect. Although the time factor plays a crucial role, it is not sufficient by itself.
This requires trainers who do not act like irrefutable role models, but rather use the training of young physicians to ponder their own practices and, in this respect, to experience supervision and feedback as a source of inspiration for themselves. The third focal area described above (omitted and overlooked problems, see figure 3) and perceived as highly beneficial specifically affects medical activities connected with primary prevention. Nonetheless, a more generalizable competency dimension is also present that can be strengthened through observation-based feedback: in addition to correctly applying “routine expertise” in familiar situations, this involves the ability to develop “adaptive expertise” to deal with novel or unexpected situations. Although learners are at first tasked with acquiring confidence by developing and training their routine knowledge and abilities, they should still learn early on that a strictly schematic application of medical rules is not always appropriate or conducive, because they will regularly – as shown above – find themselves confronted with situations that can only be dealt with effectively if they are not perceived as familiar standard situations, but rather met with adaptive measures (cf. the anamnesis example). In our experience, fostering this kind of cognitive flexibility is particularly successful when feedback is given on videotaped patient consultations, because the slowed-down viewing of the events allows the complexity of the situation to become visible, and the question of how to apply expertise in a way that is adapted to the case and the situation arises almost automatically. Previous studies also confirm that video recordings as part of the feedback have the potential to differentiate the feedback’s content and increase its perceived usefulness. As of now, it has been rather uncommon to investigate in more detail what is required of the trainers. According to what we observed, productive reflection on a videotape also entails genuine interest on the part of the trainer concerning “deliberate practice” and an associated no-blame culture that encourages and promotes learning. This aspect should certainly be the focus of further study. Table 1 summarizes several recommended formats, derived from the results, for training the trainers and mentors that, in our experience, can support a learning-focused feedback culture.
Limitations
This paper draws on the experience-based perceptions of the physician trainees. As a result, characteristics and processes of supervision and feedback come into view that explain the subjectively perceived differences in quality. However, based on the selected research design, the extent to which these differences in the perception of benefits or usefulness are, in fact, also reflected in modified action still remains open. This would require, for example, a systematic and longitudinal observation of real patient interactions for many physician trainees in combination with the corresponding feedback conferences with their trainers in order to reconstruct potential connections. The sampling was limited to medical practices that had not been randomly selected for the pilot project; rather, the trainers voluntarily chose to participate. A certain self-selection can be assumed in terms of motivation and possibly also a self-assumed competency on the part of the trainers regarding observation-based feedback.
Nevertheless, the results contained examples of more positive perceptions of the benefits and usefulness and also less positive ones. The requirement for theoretical (as opposed to statistical) representativeness, in which an investigated phenomenon should be explored as comprehensively as possible through a sample with regard to its various theoretical aspects, can therefore nevertheless be considered as fulfilled here. Likewise, it can be asserted that the reservations mentioned in the introduction regarding supervision and feedback certainly cannot be placed solely on the shoulders of the physician trainees. As indicated throughout the paper, trainers also face the question of how they are going to handle a situation and which obstacles may exist on their part when attempting to implement ways to give supportive feedback. This needs to be looked at more closely in further studies. And finally, considering the always situated and context-dependent experiences of the trainees, the question arises to what extent the findings from the pediatric context can be generalized to other medical areas or even to feedback processes as a whole.
Adherence to ethical standards
The authors state that there is no conflict of interest. The ethical standards were adhered to.
The studies were carried out in compliance with national law and the Declaration of Helsinki from 1975 (in the current revised version). GDPR-compliant consent to the collection of all data is available.
Funding
The accompanying research project on learning processes in the network for post-graduate education in pediatrics received funding from the Schleswig-Holstein Ministry for Social Affairs, Health, Youth, Family, and Seniors (Ministerium für Soziales, Gesundheit, Jugend, Familie und Senioren des Landes Schleswig-Holstein) and PaedNetz-Mittelfranken e.V.
Authors’ ORCIDs
Irene Somm: 0000-0001-9969-461X
Marco Hajart: 0000-0003-3537-5868
Folkert Fehr: 0000-0001-8495-4167
Attachments
Dataset
Interview guidelines
Detailed sequential analysis 1 of supervision/feedback using SH 6 as an example
Supervision and feedback in pediatric primary care
Estimation of the benefit from pre‐emptive genotyping based on the nationwide cohort data in South Korea
Genetic variants are a major cause of individual differences in drug efficacy and safety. One example is the relationship between cytochrome P450 enzyme (CYP) 2C19 and clopidogrel, where CYP2C19 loss-of-function variants are strongly linked to therapeutic failure. Another example involves TPMT and NUDT15 variants, where severe myelosuppression can lead to discontinuation of azathioprine in the treatment of inflammatory bowel disease. To date, ~400 pharmacogenomic variants have been included in the US Food and Drug Administration (FDA) labels. Identifying these variants is expected to optimize drug outcomes by reducing adverse events (AEs) and maximizing efficacy.
However, despite the wealth of gene-drug information available, implementing genotyping in clinical practice is challenging. One major issue is the selection of actionable genes. A study comparing pharmacogenomic guidance from various committees (including the Clinical Pharmacogenetics Implementation Consortium [CPIC] and the Dutch Pharmacogenetics Working Group [DPWG]) found that only 18% of the cases agreed. Another study highlighted that genes related to drug metabolism and transporters were less actionable (~30%) compared to molecular targets related to oncology (~70%). These issues pose barriers to the implementation of genotyping in clinical settings.
Once actionable genes are selected, the implementation of genotyping becomes another concern. Previously, point-of-care genotyping was commonly performed, evaluating a specific gene at the point of prescription. However, the decreasing cost of genotyping now enables pre-emptive genotyping for multiple actionable genes. This approach is more cost-effective compared to single-gene assays. A model-based study evaluating the cost-effectiveness of pharmacogenomic panel testing for cardiovascular diseases (CYP2C19, CYP2C9, VKORC1, and SLCO1B1 genes) demonstrated that the pre-emptive approach is more cost-effective than reactive genotyping.
Genetic distribution, drug exposure, and the demographic structure of a population are important for implementing pre-emptive genotyping. Several pharmacogenomic variants, such as those in CYP2C19 and HLA, are associated with race and biogeographic ancestry. Asian populations have a higher frequency of CYP2C19 loss-of-function alleles than other ancestries (55%), which makes actionable genotypes more likely in Asian patients. In addition, HLA-B*1502, found in Han Chinese, was associated with a 2500-fold higher risk of carbamazepine-induced Stevens-Johnson syndrome. These population differences in high-risk allele frequencies require evaluation within each specific population. Drug exposure is closely related to the prescription patterns of a country and is evaluated through electronic medical records or insurance data. Demographic structure is associated with drug exposure, and certain subpopulations could benefit more from pre-emptive genotyping. For example, whereas only 11.2% of patients aged less than 13 years are exposed to drugs recommended for genotyping in CPIC and DPWG (e.g., clopidogrel and warfarin), 50.6% of patients aged greater than 65 years are exposed to these drugs.
In this study, we aimed to estimate the benefits of pre-emptive genotyping with a focus on preventing serious AEs (SAEs). We used nationwide cohort data to estimate population-level benefits in the Korean population. Based on these results, we suggest optimal strategies for implementing pre-emptive genotyping.
Selection of gene-drug combinations and corresponding SAEs
Gene-drug combinations commonly recommended by the CPIC and the DPWG as of February 2022 were reviewed. SAEs that could be prevented by genotyping were evaluated and summarized into representative AEs per gene-drug combination. SAEs for each drug were either treatment failures or toxicities of clinical significance. For each selected AE, high-risk phenotypes and the relative risk compared with the reference phenotype were identified. The frequency of each phenotype in the Korean population was calculated using the PharmGKB genotype frequency database. Genotype frequency in the East Asian population was used instead when Korean data were not available.
Collection of estimates of risk reduction from genotyping
Relative risks (RRs) of SAEs in a gene-drug pair were collected from large-scale randomized controlled trials or meta-analyses. Combined with the exposure to the risk factor (the frequency of the high-risk genotype in the Korean population, p), the RR was converted into the population-attributable risk (PAR) using Levin's formula: PAR = p(RR − 1) / [p(RR − 1) + 1], where p is the exposure of the risk factor in the population and RR is the relative risk. PAR was interpreted as the proportion of SAEs that were attributable to the high-risk genotype (RR > 1) or could be prevented by pre-emptive genotyping (RR < 1). The prevalence of drug-specific SAEs was obtained from the literature.
Estimation of average healthcare reimbursement costs
SAEs in a gene-drug pair were matched with the closest diagnosis codes from the International Classification of Diseases (ICD). The average healthcare reimbursement costs were obtained from the Health Insurance Review and Assessment Service Statistics for 2021. If an SAE corresponded to two or more probable ICD codes, the average cost of each ICD code was used.
Estimation of drug exposure
Drug exposure in the Korean population was estimated from the National Health Insurance Sharing Service National Sample Cohort database, consisting of 1,108,369 sample patients (2% of the total population) stratified by demographics. Exposure to a drug was defined as the proportion of patients who were prescribed the drug at least once during the study interval (between 2002 and 2019). Drug exposure was calculated in total and in subgroups stratified by sex and age (5-year intervals). The study was exempt from human subject review by the Institutional Review Board of Seoul National University Bundang Hospital (IRB no. X-1907-552-903). Statistical analyses were conducted using SAS version 9.4 (SAS Institute) and R version 4.2.2 (R Core Team, Vienna, Austria).
Estimation of the population benefit of pre-emptive genotyping
The individual benefit per genotyping was calculated using the following formula: Benefit per genotyping = PAR × SAE prevalence × Average healthcare reimbursement cost. The benefit of pre-emptive genotyping in a specific group (i.e., sex and age groups) was calculated as the product of the drug exposure in that group and the individual benefit per genotyping. To compare the benefits with genotyping costs, the healthcare reimbursement costs for genotyping each gene were obtained from the Health Insurance Review and Assessment Service reimbursement costs for 2021.
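To make the calculation pipeline concrete, the following is a minimal sketch in Python of how Levin's formula, the individual benefit per genotyping, and the group-level benefit fit together. All numbers in the example (genotype frequency, relative risk, SAE prevalence, reimbursement cost, and drug exposure) are hypothetical placeholders, not values from this study.

```python
def population_attributable_risk(p: float, rr: float) -> float:
    """Levin's formula: proportion of SAEs attributable to the high-risk
    genotype (RR > 1) or preventable by genotyping (RR < 1)."""
    return p * (rr - 1) / (p * (rr - 1) + 1)


def benefit_per_genotyping(par: float, sae_prevalence: float, avg_cost: float) -> float:
    """Expected cost saving per genotyped patient for one gene-drug-event pair."""
    return par * sae_prevalence * avg_cost


def group_benefit(drug_exposure: float, individual_benefit: float) -> float:
    """Benefit in a demographic group = probability of drug exposure x individual benefit."""
    return drug_exposure * individual_benefit


# Hypothetical example (not study data): high-risk genotype frequency 0.15,
# relative risk 2.5 for the SAE, SAE prevalence 5% among users of the drug,
# average reimbursement cost $2,700, and 40% drug exposure in one age group.
par = population_attributable_risk(p=0.15, rr=2.5)
individual = benefit_per_genotyping(par, sae_prevalence=0.05, avg_cost=2700)
group = group_benefit(drug_exposure=0.40, individual_benefit=individual)
print(f"PAR = {par:.3f}, benefit per genotyping = ${individual:.2f}, group benefit = ${group:.2f}")
```

Summing such group-level benefits over all gene-drug-event pairs linked to a gene, and comparing the sum with the reimbursement cost of genotyping that gene, would yield the kind of benefit-versus-cost comparison reported in the results below.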
Gene-drug-event combinations and the corresponding healthcare reimbursement costs
A total of 95 gene-drug pairs were recommended in the CPIC guideline, among which 35 gene-drug pairs were commonly recommended in the DPWG guideline. Single representative SAEs were extracted for 34 gene-drug pairs with the highest evidence level in the CPIC guideline, except for clopidogrel, where two SAEs (relapse of myocardial infarction and major bleeding) were selected. A total of 36 gene-drug-event combinations with recommendations were finally identified.
CYP2C19 had the greatest number of gene-drug-event combinations (11 combinations), followed by CYP2D6 (10 combinations; Table ). Excluding gene-drug-event combinations for which RRs or PARs were not identified, 31 gene-drug-event combinations were included in the analysis (Table ). A summary of the SAEs matched to the closest diagnostic codes and the corresponding healthcare reimbursement costs is presented in Table .
Drug exposure
Exposure to actionable drugs according to sex and age group is presented in Figure and Table . Patients using fluorouracil, pantoprazole, or phenytoin were not included in the sample cohort database due to low prescription rates or obsolete/misclassified codes, and therefore drug exposure was not estimated for these drugs. Drug exposure was generally higher in female patients than in their male counterparts. Drug exposure increased with age, reaching its maximum in the 65–70-year age group, and decreased in age groups above 70 years. Among male patients, tramadol (40.1%), lansoprazole (14.2%), and omeprazole (11.5%) were the most frequently used drugs, whereas among female patients, tramadol (44.3%), lansoprazole (16.5%), and amitriptyline (14.1%) were the most frequently used. Tramadol was the most frequently used drug in the entire population.
Population benefit for pre-emptive genotyping
Overall, CYP2D6 and CYP2C19 showed the greatest benefits in both male and female patients (Table ). The age group of 65–70 years had the largest benefit from pre-emptive genotyping for male ($84.40) and female ($100.90) patients (Table and Figure ). Healthcare reimbursement costs for genotyping are listed in Table . Genotyping costs for each gene ranged from $100.80 to $210.00.
The benefits of pre-emptive genotyping are difficult to estimate because they require various assumptions. For example, it is difficult to estimate the probability of a patient taking a specific drug. In addition, the allele frequency of high-risk phenotypes and the prevalence of SAEs can vary among different populations. Different healthcare systems affect the cost of SAEs, which requires country-specific analyses. We used nationwide cohort data in Korea to provide objective estimates required for a cost–benefit analysis. Because the healthcare system in Korea is centralized under the national insurance system, the data can represent the total population of Korea. This may be advantageous over using hospital-based electronic health system data, which are generally susceptible to selection bias, for estimating drug exposure. Furthermore, the data were collected longitudinally, providing an estimate of population-level drug exposure. Healthcare reimbursement costs can vary among healthcare systems despite the same diagnostic code. This situation is exemplified by a comparison between Korea and the United States. The healthcare system in Korea is built on the National Health Insurance, in which most patients are mandatorily enrolled. In contrast, the healthcare system in the United States is highly dependent on private insurance. This difference has resulted in remarkably different healthcare reimbursement costs for several common SAEs. For acute myocardial infarction, the average healthcare reimbursement cost was $2739 in Korea (Table , 2021), whereas it was $15,000 in the United States. Similarly, for acute hemorrhagic cerebrovascular disease (subarachnoid and intracerebral hemorrhages), the costs were estimated at $6653–8434 in Korea, whereas they were $24,800 in the United States. Although a detailed evaluation of the treatment process is required, these cases support the idea that the estimation of benefits should be performed within the context of each healthcare system. We found that sex and age were associated with drug exposure, which may have affected the benefits of genotyping. Elderly patients (≥65 years) may benefit most from genotyping, which is consistent with their tendency to be high-cost users of prescription drugs. The benefit of pre-emptive genotyping has similarly been emphasized in the elderly due to polypharmacy. Additionally, the frequency of high-risk phenotypes can be a crucial factor in estimating costs. For example, the low frequency of high-risk phenotypes in DPYD (0.003) and TPMT (0.0003) in the Korean population limits the overall benefit of pre-emptive genotyping despite the high cost of the associated SAEs. Of note, there could be a discrepancy between the estimated drug exposure in our analysis and that of the entire patient population. Drug exposure for several drugs was omitted due to low prescription rates (phenytoin and fluorouracil) in the sample population. In addition, obsolete or misclassified drug codes might have resulted in the unexpected omission of pantoprazole from the analysis.
We estimated that the cost saving for CYP2C19-pantoprazole-peptic ulcer bleeding was $5.00, whereas the corresponding values were $12.50 for lansoprazole and $13.80 for omeprazole before applying drug exposure. Therefore, considering the expected drug exposure of pantoprazole (almost double that of lansoprazole or omeprazole) in the patient population, the cost savings for pantoprazole would be similar to those of omeprazole or lansoprazole. In addition, drug exposure can be estimated differently according to the definition and source of data. For example, drug exposure in the pediatric population is highly variable across the literature. A retrospective study in an academic children's hospital estimated that 49.3% of pediatric patients were diagnosed with genotype-associated diseases, among whom 30.9% were prescribed actionable drugs. Another study estimated an annual exposure of 1.3% for pediatric patients, which would yield a different value for cumulative estimates. Therefore, the definition and source of data must be accounted for when estimating the potential benefits from genotyping. Another issue in estimating benefits is the heterogeneous risk reduction achieved by genotyping. A similar issue was addressed previously, as most pharmacoeconomic studies focused on single-gene genotyping. In fact, the amount of risk reduction from genotyping has been reported for only a few gene-drug-event cases, and the methods used were highly heterogeneous. We attempted to address this issue by using PAR to provide a standardized, quantitative estimate of preventable risk and by integrating PAR with the prevalence obtained separately. This approach enabled more efficient use of reported values to estimate the benefit of genotyping. It is noteworthy that current healthcare reimbursement costs for genotyping are higher than the calculated benefits from pre-emptive genotyping. These results should be interpreted carefully in light of the costing methods used for genotyping. Genotyping costs are divided into direct and indirect costs. Direct costs include consumables and reagents, whereas indirect costs are associated with infrastructure, such as facilities, administrative costs, and maintenance fees. We suppose that current healthcare reimbursement costs might reflect individual-level total genotyping costs, which could be reduced when a population-level multiple genotyping strategy is adopted. Therefore, the results should be cautiously interpreted, and further investigation into the costs of various genotyping strategies is required. Our results support the potential benefits of pre-emptive genotyping. As we restricted the estimation of benefit to gene-drug pairs commonly recommended in both guidelines and included only AEs for which the medical cost is quantifiable, the actual benefit from genotyping would be higher. In addition, for brevity, the benefit estimated in our study included only direct medical costs; it would increase if indirect medical costs (e.g., transportation expenses and expenses related to changing jobs), as recommended in pharmacoeconomic evaluations, were included. For example, the indirect cost of acute myocardial infarction in Korea accounted for 42.3% of the total costs in 2012. Therefore, the benefit calculated in our study is a minimal estimate but could inform the implementation of cost-effective genotyping panels. The cost-effectiveness of pre-emptive genotyping compared with reactive genotyping is still under debate.
Reactive genotyping is easier to perform and has already shown cost-effectiveness. However, the strategy is restricted to specific gene-drug pairs, and application to other drugs is limited. In contrast, pre-emptive genotyping has the advantage of covering multiple gene-drug pairs comprehensively and would be more efficient when multiple genes are evaluated in a single panel. We performed an additional illustrative test with two major genes (CYP2C19 and CYP2D6) in our study. We obtained the combined distribution of CYP2C19 and CYP2D6 from a previous study of 1003 Japanese patients. To evaluate whether the frequencies of high-risk phenotypes for CYP2C19 and CYP2D6 were independent (i.e., whether separate genotyping would suffice), we performed an independence test (Table ). We found that the distributions of high-risk phenotypes of CYP2C19 and CYP2D6 were not independent, which implies that testing the two genes together could be a more efficient way to identify high-risk phenotypes than testing them separately. The preliminary results from this analysis could support the rationale for pre-emptive genotyping. Our study had several limitations. We simplified the benefits to include only the prevention of SAEs, excluding mild-to-moderate AEs, which could have underestimated the actual benefits. Drug exposure estimates do not consider the indication for a drug and may oversimplify the real-world situation. In addition, the long-term benefits of genotyping need to be evaluated in terms of improving quality of life. The estimation of benefits was based on the reported allele frequencies and insurance costs in the Korean population, which can limit the application of the results to other countries. The prevalence of SAEs can vary among populations and requires further investigation. More comprehensive prospective studies are required to investigate the economic value of pre-emptive genotyping in South Korea. In conclusion, pre-emptive genotyping can yield measurable benefits in preventing SAEs within the healthcare system. Considering drug exposure and genotype distribution, genotyping CYP2D6 and CYP2C19 in the age group of 65–70 years would result in the greatest benefits, estimated at a minimum of $84.40–100.90 per individual. K.Y.H., S.H., J.Y.N., K.-S.Y., I.-J.J., J.-Y.C., and S.Y. wrote the manuscript. K.Y.H., S.H., J.Y.N., and S.Y. designed the research. K.Y.H., S.H., and J.Y.N. performed the research. K.Y.H., S.H., J.Y.N., and S.Y. analyzed the data. This work was supported by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-2019R1C1C1006688) and the Seoul National University Bundang Hospital Research Fund (14-2020-0040). The authors declared no competing interests for this work.
Cleansing efficacy of an auto-cleaning device versus an oscillating-rotating toothbrush in home use. A pilot study in individuals with Down syndrome
e058c862-bcc7-4aa8-82e4-e23861864972
11811454
Dentistry[mh]
Down syndrome (DS) or trisomy 21 is the most frequent form of developmental intellectual delay, caused by a triplicate state of all or a critical portion of chromosome 21. Its main clinical features include mental impairment and characteristic facies, hypotonic musculature, congenital defects of the heart and gastrointestinal tract, neurobiological alterations, respiratory diseases, and a significantly higher risk of developing infection. Characteristic oral features of individuals with DS are a mild mandibular prognathism and a hypoplastic maxilla, macroglossia, microdontia, short roots, tooth agenesis, delayed tooth eruption, and dry mouth due to mouth breathing and lack of lip sealing. A recent clinical study compared oral health characteristics of children and adolescents with DS to an age-matched control group, showing increased rates of gingival inflammation and a greater number of severe malocclusions, whereas there is conflicting evidence on whether caries prevalence is decreased or increased. A significant association between DS and periodontitis, with an odds ratio of 3.93 (95% CI 1.81–8.53), and significantly higher probing depths in individuals with DS as compared to controls were shown in a recent meta-analysis including eleven clinical studies. The high prevalence and severity of periodontitis cannot simply be attributed to poor oral hygiene but is based on abnormalities in both the innate and adaptive immune systems, including mild to moderate T-cell lymphopenia, reduced antibody responses, and impairments in chemotaxis and neutrophil phagocytosis. Due to intellectual impairment and impaired motor function, most individuals with DS largely depend on their caretakers' support or supervision, also when it comes to domestic oral healthcare. The plaque-reducing efficacy of toothbrushing, both with manual and electric toothbrushes, largely depends on the brusher's understanding, motivation, and dexterity. In the general population, a small but statistically significant superiority of powered over manual toothbrushes was found for dental biofilm removal, whereas two recent systematic reviews found no significant differences between manual and powered toothbrushing in people with physical or intellectual disabilities regarding plaque removal or gingival health. This applied to both self-brushing and caregiver brushing. Lately, automatic toothbrushes have gained popularity. Designed primarily to accommodate negligent toothbrushers through simple handling and a reduced brushing time by cleaning all teeth of a jaw or the whole mouth simultaneously, automatic toothbrushing devices have so far not conclusively met expectations concerning plaque removal. A clinical study testing the auto-cleaning device Amabrush® (Vienna, Austria) found an insufficient fit of the horseshoe-shaped mouthpiece with diverse dental arches and an inappropriate bristle alignment, resulting in poorer plaque removal compared with uninstructed manual brushing in young healthy volunteers. Another recent study compared the cleansing efficacy of the auto-cleaning device Y-brush® (Lyon, France) with that of manual toothbrushing in a single brushing exercise in 20 healthy probands. Full-mouth plaque reduction was significantly lower with auto-cleaning for 5 s per jaw than with manual toothbrushing, with statistical significance at marginal and interdental sites but not on smooth tooth surfaces. Increasing the brushing time to 15 s per jaw resulted in a full-mouth plaque reduction comparable to manual toothbrushing ( p = 0.177).
Statie et al. showed in a crossover randomized trial that the Y-brush (10 s per jaw) was significantly more effective than no brushing (negative control) but less effective than manual toothbrushing in dental plaque removal. Authors of both studies concluded that the auto-cleaning device might be recommendable for individuals with low dexterity. For individuals with intellectual disabilities and/or impaired motor function and a resulting poor plaque removal competence with conventional toothbrushing, an automatic toothbrushing device might constitute a beneficial tooth cleaning modality and promote self-reliance. Therefore, the aim of the present randomized and single-blinded cross-over study was to compare the cleansing efficacy of the auto-cleaning device Y-brush® with that of the oscillating-rotating toothbrush Oral-B® Pro 3 3000 in unassisted domestic use over a period of four weeks by persons with DS. The null hypothesis was that there would be no difference in cleansing efficacy between the two brushing modalities. The present study was conducted in accordance with the 1964 Declaration of Helsinki and its later amendments. Ethical approval was obtained from the Ethics Committee of the Medical University of Innsbruck, study number 1108/2022. The study was registered at the registry for clinical studies of the University Hospital of Innsbruck (Koordinationszentrum für klinische Studien; [email protected]), registration number 20221123–3057. Legal guardians of all participants and all participants themselves gave their informed written consent prior to study enrollment. Subjects Sample size calculation (please see Statistical analysis) determined a sample size of nine. To increase the validity of this study, we aimed to include a sample of 20 individuals. Inclusion criteria comprised 1) the diagnosis of DS, 2) minimum age of 18 years, 3) presence of at least ten teeth per jaw and of at least four teeth per quadrant, 4) community periodontal index of treatment needs (CPITN) grade 1 or 2. Exclusion criteria were 1) pregnancy or breastfeeding, 2) concurrent participation in another study, 3) presence of decayed teeth, and 4) presence of active orthodontic appliances. Teeth with direct or indirect restorations or dental implants were not excluded. Recruitment of participants was carried out from December 2021 to November 2022 by means of a circular sent out by the Down Syndrome Association Tyrol (Verein Down-Syndrom Tirol). Clinical intervention The cleansing efficacy of brushing with the auto-cleaning device Y-brush® versus a rotating-oscillating toothbrush was evaluated in a randomized, examiner-blinded, two-period crossover study. Each subject was invited to attend four appointments (Fig. ). At appointment one, each subject and his/her legal guardian were informed about the study procedure and signed an informed consent form after confirmation of in- and exclusion criteria. At baseline, the gingival bleeding index (GBI; Ainamo and Bay) was assessed at six sites per tooth. After plaque disclosure by applying the disclosing solution (2Tone, Young, Earth City, Mo, USA) with a sponge pellet and rinsing thoroughly with water, the Rustogi Modified Navy Plaque Index (RMNPI) was assessed by one blinded and trained examiner. Professional tooth cleaning was accomplished with sonic/ultrasound or manual scalers (in case of present calculus) and rubber cups with polishing paste (Cleanic®, Kerr, Bioggo, CH).
Each participant was instructed in the use of the respective toothbrushing device, which was allocated by computer-generated randomization, and provided with toothpaste (Sensodyne PRO Schmelz, GlaxoSmithKline Pharma GmbH, Vienna, Austria). Participants' allocation to group 1 ("Y-brush® first") or 2 ("Oral-B® Pro 3 3000 first") was accomplished prior to the study by Excel (Microsoft, Redmond, Washington, USA) with the random function Zufallszahl. Twice daily use of the assigned brush and refraining from the adjunct use of any other toothbrush, toothpaste, any interdental cleaning device, mouthrinse, or chewing gum were required. Caretakers were instructed not to assist with the brushing procedure. The second appointment took place after four weeks' use of the assigned toothbrush. Again, GBI and (after plaque disclosure) RMNPI were scored by the same blinded examiner. After professional tooth cleaning, a one-week wash-out phase followed. Probands were told to resume their accustomed oral hygiene as before the study. At the third appointment, GBI and RMNPI scores were taken by the same examiner, and participants were instructed in the handling of the other toothbrush to be tested. The fourth appointment included blinded GBI and RMNPI assessment and final professional cleaning. Y-brush® The horseshoe-shaped flexible mouthpiece has been designed to clean all teeth of one jaw at the same time (Fig. A). It is mounted on one side with nylon bristles, which are inclined at a 45° angle to the gums to mimic the BASS technique. In this study, size M for adults, which is designated for age 12 plus, was used. Probands were instructed to wet the mouthpiece, load it with toothpaste by use of the supplied silicone toothpaste applicator, to insert and adjust it to the upper dental arch to ensure maximum fit, and to use the 15 s mode. After starting the cleaning process by pressing the start button, the mouthpiece should gently and quickly be chewed on and moved by the handle to the left and right side according to the manufacturer's user manual. The procedure was to be repeated in the lower jaw by turning the mouthpiece around, with bristles and operating button facing downward, and placing it onto the lower dental arch. Oral-B® Pro 3 3000 The second toothbrush used in this study was the rotating-oscillating brush Oral-B® Pro 3 3000 with the brush head Oral-B® CrossAction (both Oral B, Procter & Gamble, Cincinnati, Ohio, USA) (Fig. B), which is mounted with about 2,200 bristles at a 16-degree angle. Probands were instructed to use the mode "daily cleansing". They were briefed to brush systematically tooth by tooth, occlusal sites first, oral sites second, and vestibular sites third in each jaw, placing the brush head at right angles onto the respective site and keeping it there for three seconds. Paying attention to the green light of the brush's contact pressure control should ensure adherence to the correct pressure. Rustogi modified navy plaque index The index divides buccal and lingual surfaces into nine areas (A to I) that are scored for the presence (score = 1) or absence (score = 2) of plaque. It assesses the amount of plaque on a nine-surface basis (areas A-I), smooth surface basis (areas E, G, H, and I), interdental surface basis (areas D and F), and gingival margin surface basis (areas A-C). Third molars were excluded from the evaluation, whereas teeth with fillings, inlays, onlays, or crowns were included. RMNPI is calculated as the percentage of biofilm-adhering areas relative to measured areas.
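As a purely illustrative aid (not the software used in this study), the sketch below computes whole-mouth and subscale RMNPI values from dichotomous per-area scores; for simplicity it codes plaque presence as 1 and absence as 0, and the two example surfaces are hypothetical.

```python
# Illustrative RMNPI computation from per-area plaque scores (1 = plaque present,
# 0 = absent) for the nine areas A-I of each buccal/lingual surface.
# The example surfaces below are hypothetical, not study data.

AREAS = list("ABCDEFGHI")
GINGIVAL_MARGIN = {"A", "B", "C"}   # gingival margin subscale
INTERDENTAL = {"D", "F"}            # interdental subscale
SMOOTH = {"E", "G", "H", "I"}       # smooth-surface subscale


def rmnpi(surfaces, areas=None):
    """Percentage of scored areas carrying plaque, optionally restricted to a subset of areas."""
    selected = set(AREAS) if areas is None else set(areas)
    scored = positive = 0
    for surface in surfaces:  # each surface: dict mapping area letter -> 0/1 score
        for area, score in surface.items():
            if area in selected:
                scored += 1
                positive += score
    return 100.0 * positive / scored if scored else 0.0


if __name__ == "__main__":
    example_surfaces = [
        {**dict.fromkeys("ABCD", 1), **dict.fromkeys("EFGHI", 0)},  # hypothetical surface
        dict.fromkeys(AREAS, 0),                                    # hypothetical plaque-free surface
    ]
    print(f"whole mouth:     {rmnpi(example_surfaces):.1f}%")
    print(f"gingival margin: {rmnpi(example_surfaces, GINGIVAL_MARGIN):.1f}%")
    print(f"interdental:     {rmnpi(example_surfaces, INTERDENTAL):.1f}%")
    print(f"smooth surfaces: {rmnpi(example_surfaces, SMOOTH):.1f}%")
```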
Gingival bleeding index (Ainamo and Bay) The gingival bleeding index (GBI) was determined by moving a periodontal probe within the gingival sulcus from the middle of the respective tooth towards the mesial and distal papilla. After 30 s, the presence or absence of bleeding was recorded in a dichotomous manner at six sites per tooth: mesio-buccal, buccal, disto-buccal, mesio-lingual, lingual, and disto-lingual. GBI was calculated as the percentage of bleeding sites relative to evaluated sites. Statistical analysis Sample size calculation was based on mean values and standard deviations of whole-mouth RMNPI scores provided by . In that clinical investigation the cleansing efficacy of the Y-Brush® was compared with that of uninstructed manual toothbrushing in periodontally healthy individuals without disabilities in a single brushing exercise. Mean RMNPI after brushing was 13.04 ± 5.18% for manual toothbrushing and 29.8 ± 10.17% for the auto-cleaning device. Based on these data, a sample size calculation for dependent samples with a power of 90% and α = 0.01 yielded a sample size of nine for the present investigation. For descriptive analysis and if not stated otherwise, median and range are given. On a participant level, RMNPI values were calculated as the total number of tooth areas with plaque present divided by the total number of tooth areas scored. An analogous calculation of the GBI was performed. The main outcome, whole-mouth RMNPI, as well as GBI scores, were compared between the two toothbrushing procedures using the Wilcoxon signed-rank test. The statistical analysis was conducted using IBM SPSS Statistics V.29.0.0.0 (IBM, Armonk, NY, USA). The significance level was p < 0.05, with a power of 80%.
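For illustration only (not the study's actual analysis), paired whole-mouth scores could be compared between the two modalities with the Wilcoxon signed-rank test roughly as follows; the paired values below are hypothetical placeholders, not participant data.

```python
# Illustrative paired comparison of whole-mouth RMNPI (or GBI) scores between the
# two brushing modalities using the Wilcoxon signed-rank test.
# The score vectors are hypothetical placeholders, not study data.

from scipy.stats import wilcoxon

# one post-brushing whole-mouth score per participant and modality (hypothetical)
rmnpi_y_brush = [59.0, 62.5, 48.0, 71.0, 55.5, 60.0, 66.0, 52.0, 58.5]
rmnpi_oscillating_rotating = [54.0, 60.0, 45.5, 68.0, 50.0, 61.5, 63.0, 49.0, 57.0]

statistic, p_value = wilcoxon(rmnpi_y_brush, rmnpi_oscillating_rotating)
print(f"Wilcoxon signed-rank test: W = {statistic:.1f}, p = {p_value:.3f}")
```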
Subjects Sixteen Caucasians with DS were recruited to participate in this study. One proband dropped out during the second intervention period because she objected to the unaccustomed automatic cleansing protocol. Ten men and five women (mean age 31 ± 8.33 years) finished the study. Rustogi modified navy plaque index Baseline full-mouth RMNPI scores were not statistically significantly different between the two brushing modalities (Y-brush®: 46.2%; range 6.3 – 75.2; Oral-B® Pro 3 3000: 49.6%; 11.7 – 63.9) ( p = 0.208). After four weeks of twice daily (unassisted) brushing there was no statistically significant difference between the two interventions ( p = 0.484), although RMNPI scores increased to 59.2% (24.8 – 76.7) for brushing with the Y-brush® ( p = 0.024) and to 54.6% (6.4 – 71.3) for brushing with the rotating-oscillating toothbrush ( p = 0.342). There were no statistically significant differences in RMNPI subgroup analyses between the two groups (Table ). For both brushing modalities, baseline and post-brushing RMNPI scores were statistically significantly lower in the upper than in the lower jaw (Fig. ). For both brushing modalities, baseline and post-brushing RMNPI scores were statistically significantly lower in oral than in vestibular sites. Gingival bleeding index Baseline GBI scores were not statistically significantly different between the two brushing modalities (Y-brush®: 5.1%; range 0.6 – 38.1; Oral-B® Pro 3 3000: 8.3%; 2.6 – 23.2) ( p = 0.271). While there was a statistically significant decrease between baseline and post-intervention scores for the rotating-oscillating toothbrush (GBI 4.0%; 0.0 – 15.5) ( p = 0.027), there were no significant differences after brushing with the Y-brush (GBI 11.5%; 0.6 – 26.2) ( p = 0.374), both showing a wide range (Fig. ).
A limited ability to accomplish sufficient oral hygiene, along with other restrictions, results in poorer periodontal conditions and higher unmet caries treatment needs in individuals with intellectual disabilities. Improvement of oral care in disabled individuals is a general necessity to address oral health inadequacies. A previous study found a plaque reduction with the auto-cleaning device Y-brush® comparable to that of manual toothbrushing when the brushing time of auto-cleaning was 15 s per jaw. Its authors therefore hypothesized that auto-cleaning devices might be at least recommendable for people with low manual or cognitive skills, who would be expected to be less effective in plaque removal by self-use of manual or conventional powered toothbrushes. Consequently, in the present study we compared the cleansing efficacy of the Y-brush® with that of an oscillating-rotating toothbrush (Oral-B® Pro 3 3000 and the CrossAction brush head) in unassisted domestic use by persons with DS. The null hypothesis was confirmed by this clinical cross-over investigation. Neither of the tested brushes prevailed over the other regarding post-brushing RMNPI in unassisted (twice daily) use by individuals with DS over four weeks. Moreover, in unassisted use both tested brushes compared poorly with the customary daily oral hygiene routinely applied before the study and during the wash-out period (Table ). Participants' mode of toothbrushing before the onset of the study and during the wash-out period was not recorded in detail, but caretakers were to a varying extent involved in the cleansing procedure in the sense of supervision, assistance, or complete performance. Additionally, a bias cannot be ruled out, as participants and/or caretakers might have paid particular attention to oral hygiene before the study or during washout.
The current study was conducted as a pilot study. Although we aimed to include 20 participants, our study population finally consisted of 15 individuals aged 31 ± 8.33 years. A post hoc estimation of the power for P(X > Y) = 0.6 resulted in a power of 0.1559 (Wilcoxon signed-rank test). A follow-up study with a power of 80% and a two-sided alpha of 0.05 would have to include 131 subjects. In the present study, we even had unsolvable problems recruiting the targeted 20 test subjects, despite personal contacts with the self-help group. A follow-up study would therefore probably have to be planned as a multicenter study or in a non-dental setting. Social cohesion within an association reflects team spirit and caretakers' and participants' willingness to collaborate on collective projects. Thus, a selection bias, in the sense of recruiting participants who are supported by particularly diligent caretakers pursuing the best care for their dependents, is probable. The findings of the study can also be interpreted to indicate that the automated toothbrush is as effective as rotating-oscillating toothbrushing, with the added benefit of requiring considerably less effort and time. Effective brushing with rotating-oscillating brushes, as commonly recommended and instructed in this study, requires adherence to a predefined sequence and a certain dexterity to place the brush correctly onto each tooth surface. Both the intellectual and motor function impairments present in individuals with DS might cause poor plaque control with this kind of powered brush. However, the handling of the auto-cleaning device Y-brush® also poses some difficulties, beginning with loading a specified amount of toothpaste by means of the provided silicone applicator, which cannot be connected firmly enough to the toothpaste tube. According to instructors' observations, the insertion and adjustment of the mouthpiece to the upper or lower jaw were frequently hampered by macroglossia. One and the same button serves to start the cleansing procedure (first press) and to select the duration of the brushing cycle (second press). Operation of this button might have overstrained study participants, and consistent use of the required 15 s mode is not assured. When the button faces downwards after turning the device from one jaw to the other, its handling is additionally hampered by the lack of visual control. During the cleansing process, the mouthpiece should be chewed on to facilitate a certain pressure and, at the same time, should be manually twisted by the handle to the left and right side in order to sufficiently cover posterior teeth, a procedure that may be too complex for disabled persons and was probably not performed correctly by study participants. The difficulties in handling the Y-brush® could perhaps be compensated for by supervision and/or assistance by caregivers. An increase of the brushing time to e.g. 30 s per jaw might to some extent also offset delayed or inadequate auto-cleaning device operation. The aim of this study was to evaluate unassisted use of both tested brushes. It is not assured that either of the tested brushes was used correctly. Special anatomical conditions such as macroglossia and jaw incongruity might be the reason for less plaque accumulation in the upper than in the lower jaw and in oral than in vestibular tooth sites, which was found at baseline as well as at post-brushing assessments with both tested brushes. Before release, participants were asked which brush they preferred in daily use.
Nine participants stated that they preferred the Oral-B® Pro 3 3000 brush over the Y-brush®, and six participants favored the auto-cleaning device. The fact that the majority of participants preferred the Oral-B® Pro 3 3000 brush over the auto-cleaning device hints at a certain objection to the, after all, intricate automatic cleaning procedure and at the necessity to simplify commercial devices intended for use by persons with intellectual or physical disabilities. The inaccurate fit of pre-manufactured auto-cleaning devices with diverse dental arches and their inadequate bristle alignment, analyzed in study participants' plaster casts, have already been criticized in a previous study. In this study, we refrained from impression taking and plaster cast analysis to limit the strain on participants and not compromise cooperation. In individuals with DS, malocclusions, namely class III malocclusion and anterior crossbite, are more prevalent than in the general population. Intra-individual discrepancies of dental arch sizes might additionally limit the use of arbitrary mouthpieces. In very small (upper) jaws the M size mouthpiece may have been too large. It has been claimed that individually (per jaw) customized mouthpieces (including tailored bristle alignment) based on dental models might be a promising future approach to enhance the plaque removal efficacy of automatic toothbrushing devices. Three-dimensional digital intraoral scanning and the generation of printed or virtual models might facilitate the industrial production of customized mouthpieces, the technical feasibility and economic viability/costs of which are still to be calculated. In the present pilot study, neither the auto-cleaning device Y-brush® nor rotating-oscillating toothbrushing reached satisfactory plaque levels in unassisted use by persons with DS. Further studies should investigate the impact on plaque removal efficacy of caregivers' assistance with auto-cleaning devices for persons with disabilities.
Challenges in diagnosing canine spindle cell tumours using immunohistochemistry, illustrated by three nonpigmented malignant cases from the nictitating membrane
a5cd82d9-012d-43c3-8cf1-c4ca7be78ca1
10893616
Anatomy[mh]
Tumours of the membrana nictitans are quite rare in dogs, the most common being adenomas or adenocarcinomas arising in the membrana nictitans gland. Other tumours recorded are squamous cell carcinomas and papillomas; melanomas and melanocytomas; hemangiosarcomas, hemangiomas and angiokeratomas; leiomyoma; mast cell tumours; lymphomas; plasmacytomas [G. C. Shaw, personal communication, COPLOW, 2019]; myoepitheliomas; basal cell carcinomas and complex carcinomas; a transmissible venereal tumour; a malignant peripheral nerve sheath tumour; and a histiocytoma. Melanomas in general are malignant tumours relatively common in dogs, especially the pigmented types located in the skin. Melanomas of the canine conjunctiva are rare in the literature, most of them being pigmented [C. R. Reilly et al., ACVO 2005 abstract no.: 39], making amelanotic melanomas of the canine nictitating membrane very rare. Melanomas arise from melanocytes, which originate from the neural crest; they vary histopathologically from epithelioid cell types to spindle cell types or a mix of both. Earlier publications in humans have shown that UV exposure is a risk factor, but recent research has suggested multiple causes. Risk factors for canine melanomas are still under investigation; the tumours develop in the same locations as in humans, but there is a strong breed predisposition and an overrepresentation in black-coated dogs, associated with both UV- and non-UV-induced pathways. Sarcomas in general are malignant tumours derived from connective tissues. Hemangiosarcomas are composed of neoplastic endothelial cells. In dogs most sarcomas seem to be spontaneous. Earlier publications strongly indicate that UV exposure is a risk factor in conjunctival hemangiosarcomas. There is no breed predisposition for conjunctival hemangiosarcomas, but middle-aged to older, middle- to large-size dogs with significant outdoor activity are more commonly affected. Neither is there a sex predisposition, but hemangiosarcomas in general occur more commonly in neutered than in intact individuals, indicating a possible hormonal link. Tumours of the nictitating membrane are in general characterized by protrusion of a firm or irregular local mass expanding the nictitating membrane, either on the bulbar side, the palpebral side or on the leading edge of the nictitating membrane. Melanomas in the nictitating membrane are mostly heavily pigmented, though they can also be partly or totally amelanotic. Vascular tumours in this area arise in different shades of pink, most often in nonpigmented areas, with conjunctival hyperaemia present. For vascular tumours a blood blister-like appearance is also quite typical. Early, complete surgical excision is recommended and may be curative, though recurrence is a risk, as malignant spindle cell tumours in general demonstrate aggressive, local or multi-focal invasive tissue involvement. In human oncology the minimum surgical margin to reduce the risk of local recurrence of sarcomas has not yet been clearly defined. In cutaneous melanomas there are more well-established standards recommending 1 to 2 cm margins depending on the thickness of the primary tumour. In the future, optical coherence tomography (OCT) or micrographic surgery may prove helpful intra-operative tools for visualizing tumour-affected or tumour-free margins in surgery of dogs. The treatment of choice for melanomas and hemangiosarcomas of the nictitating membrane in dogs is surgery.
Radiation therapy and systemic chemotherapy have been used with success in melanomas and non-dermal hemangiosarcomas. Immunotherapy is being applied for melanomas. Histopathological variation represents a diagnostic challenge in specifying tumour type, as many neoplastic cells have histologic patterns with overlapping features, but the development of immunohistochemistry has improved the diagnostic process of spindle cell tumours and especially melanomas in veterinary pathology. Using a dog-adapted protocol is essential to secure the right diagnosis. Most immunohistochemical protocols are developed for human tissues and must be controlled and adapted to each specific species to avoid false positive or negative results. The aim of this study is to describe three rare cases of malignant, macroscopically nonpigmented spindle cell tumours of the canine membrana nictitans and to compare them with previous publications on this subject. We emphasise the importance of using an immunohistochemistry protocol adapted to dogs. This retrospective study included three canine patients managed clinically by ECVO (European College of Veterinary Ophthalmology)-recognised, authorized veterinary ophthalmologists. The three malignant spindle cell tumours of the nictitating membrane were diagnosed by co-author SH at the Eye Pathology Section, Copenhagen University Hospital (Rigshospitalet) in Denmark from 2000 to 2023. In twenty-three years only three malignant nonpigmented spindle cell cases have been diagnosed in Scandinavia [S. Heegaard, personal communication, 2023], [R. Grandón, personal communication, BioVet, 2023]. The tumours were further analysed by co-author CNF during 2022 and 2023. This study did not require official or institutional ethical approval. The animals were handled clinically according to high ethical standards and national legislation. Histopathology and immunohistochemistry Archived tissue samples were retrieved in the three cases. All specimens were formalin-fixed and paraffin-embedded (FFPE) and stained with haematoxylin and eosin (H&E), Gram, Trichrome Gomori, Masson trichrome, and periodic acid-Schiff (PAS) according to standard protocols. The FFPE blocks were retrieved and additional, serial 4-µm tissue sections were cut and mounted on slides prior to immunohistochemical staining. Immunohistochemical stains were performed on a Ventana BenchMark ULTRA platform (Ventana Medical Systems Inc., Tucson, AZ, USA) as previously described, according to a human protocol. The following primary antibodies were used: S-100 (polyclonal, 1:4000 dilution, DAKO A/S, Glostrup, Denmark), Vimentin (clone 3B4, 1:400 dilution, DAKO A/S), Cytokeratin (clone AE1/AE3, 1:200 dilution, DAKO A/S), Smooth muscle actin (SMA) (clone 1A4, 1:500 dilution, DAKO A/S), Glial fibrillary acidic protein (GFAP) (polyclonal, ready-to-use (RTU), DAKO A/S), and Melan-A (clone A103, 1:100 dilution, DAKO A/S). The Dako Envision Flex system Labelled Polymer Anti-mouse (Dako Agilent, Santa Clara, CA, USA) was used as a secondary antibody according to the manufacturer's instructions. (Table ). Additionally, Melan-A and PNL2 antibodies were applied to detect melanocytic differentiation using a protocol that has been standardized for use in canine tissues. Briefly, for Melan-A the Dako clone A103 (Dako Agilent, Santa Clara, CA, USA) was used as a primary antibody at a 1:50 dilution. Antigen retrieval was done with pH 9 EDTA buffer (Fischer Scientific, Loughborough, Leicestershire, UK).
For PNL2, the Santa Cruz Clone PNL2 sc-59,306 (Santa Cruz Biotechnology, Heidelberg, Germany) was used at a 1:100 dilution. Antigen retrieval was done with sodium citrate buffer (Fischer Scientific, Loughborough, Leicestershire, UK). The Dako Envision + system-HRP Labelled Polymer Anti-mouse (Dako Agilent, Santa Clara, CA, USA) was used as a secondary reagent for both primary antibodies. Primary and secondary antibodies were both incubated for 30 min at room temperature, and the reactions were visualized using DAB+ substrate buffer and DAB+ chromogen (Dako Agilent, Santa Clara, CA, USA) for 10 min. (Table ). Case 1 A 12 kg, 8-year-old intact female Dachshund dog presented with a hematoma-like lesion on the palpebral surface of the right nictitating membrane; the rest of the ophthalmic examination was normal. Initially, the hematoma was attributed to trauma, but after one week of progressive enlargement a tumour was suspected. Surgical excision of the mass was performed under general anaesthesia. This mass was not sent for histopathology. The surgical site healed uneventfully, but after 1.5 months regrowth was noticed. Another excision was performed under general anaesthesia. Two months later, a further regrowth appeared on the same site. This time the tumour had a multilobulated and cystic appearance (Fig. a).
A larger resection of the tumour and the nictitating membrane was performed under general anaesthesia. This resection was sent for histopathology. The dog survived for another 3 years and was then euthanized for reasons unrelated to this disease. At this point another regrowth was observed in the remnants of the nictitating membrane. Necropsy was not performed. (Table ). Case 2 A 37.5 kg, 12-year-old intact female white-coated Akita dog presented with a hyperaemic, grossly nonpigmented tumour arising from the palpebral surface of the right nictitating membrane (Fig. a). The ophthalmic examination was otherwise normal. The firm hyperaemic mass and part of the nictitating membrane were resected and sent for histopathological examination. A plain chest radiograph was performed, which revealed multiple small nodular densities suspected to be metastases, with age-related changes as a differential diagnosis. A year and a half later, a regrowth of the tumour was observed in the nictitating membrane of the medial canthus of the right eye. An additional chest radiograph at that time showed no further development of the nodular densities. A larger resection of the nictitating membrane including the tumour was performed under general anaesthesia. The histopathological diagnosis confirmed the suspicion of recurrence of the resected tumour. Seven months later the dog had exophthalmos with exotropia and a mass in the medial canthus. A routine blood profile revealed lymphopenia and slightly elevated serum calcium. As the dog had exophthalmos and the owner did not want to euthanize the dog, an exenteration was performed. During this surgery the mass, which was now involving the medial and retrobulbar area of the orbit, was resected. Within four months there was recurrence in the medial aspect of the cutaneous scar. This area was resected, but within one month the dog had developed respiratory signs with coughing and was euthanized. Necropsy was not performed. (Table ). Case 3 A 9 kg, 14-year-old intact female tricolour Shetland Sheepdog presented with a pendulous tumour, partly pigmented on its surface, on the palpebral surface of the nictitating membrane of the right eye. The tumour was ulcerated, which was suspected to be due to self-trauma. The ophthalmological examination was otherwise normal. There was no evidence of systemic disease on the general physical examination. Blood count and routine biochemistry were both normal. The tumour was initially resected under local anaesthesia by a veterinary ophthalmologist. Four months after excision the tumour recurred (Fig. a). The dog was then fully anesthetized and large areas of the nictitating membrane were surgically removed and sent for histopathological examination. The dog survived for another 1.5 years and was then euthanized because of age and regrowth of the tumour. Necropsy was not performed. (Table ). Clinical findings All three dogs were female, with a mean age of 11 years at the time of the first examination. All three tumours initially appeared as a protrusion on the right side, with a firm or multilobulated hyperaemic mass on the palpebral surface of the nictitating membrane. In two cases the tumour was grossly nonpigmented, the third being slightly pigmented on the surface and ulcerated. All three tumours were surgically resected. All three tumours recurred after the first surgery, and two of the three recurred after a second surgery. One of these two recurred after a third surgery, even though exenteration was performed.
In the two other cases that developed recurrences, the nictitating membrane had been resected, and this extended the time to the next recurrence. Histopathological findings Case 1 (third resection) had a well-demarcated, non-encapsulated, expansile and highly cellular mass that was expanding the conjunctival substantia propria adjacent to the leading edge on the palpebral surface of the right nictitating membrane (Fig. b), thereby mildly compressing the adjacent gland. The mass was composed of plump spindle cells that frequently formed vascular lumina where erythrocytes were present (Fig. c). Cells showed prominent nucleoli, and there were 11 mitotic figures/standard area of 2.37 mm² (corresponding to 10 high-power fields (HPF)). The mass was not present at the surgical margins. Case 2 (first resection) presented with pleomorphic spindle tumour cells in a fascicular pattern expanding the conjunctival substantia propria (Fig. b), multifocally abutting on and elevating the epithelium on the palpebral surface of the right nictitating membrane. The mass was moderately demarcated, nonencapsulated, expansile, and moderately infiltrative at the periphery. Cells were pleomorphic, exhibited karyomegaly, and occasionally contained coarsely granular brown pigment in their cytoplasm. There were 8 mitotic figures/standard area of 2.37 mm² (corresponding to 10 HPF). Occasional intraepithelial nests of atypical, variably pigmented cells were noted (junctional activity). The mass was present at the surgical margins. Case 3 (second resection) presented with a fibrillary infiltration of pleomorphic tumour cells expanding the conjunctival substantia propria at the leading edge of the palpebral surface of the right nictitating membrane. Tumour cells were plump and spindle-shaped and multifocally showed a vacuolated cytoplasm (Fig. b). Rare cells showed sparse coarsely granular brown pigment in their cytoplasm. There were 11 mitotic figures/standard area of 2.37 mm² (corresponding to 10 HPF). The surgical margin was not affected; the narrowest margin was 3 mm. Immunohistochemical findings The immunohistochemistry revealed positive staining for vimentin in all three cases, especially in case where it was strongly positive. S-100 was also positive in all three cases, though only sparsely-moderately in case and , and strongly positive in 80% of the cells in case . SMA was slightly positive in neoplastic cells of case and strongly positive in 50% of the neoplastic cells in case . Cytokeratin staining was negative in the tumour cells of all three cases. GFAP was positive in both case and , though specifically strongly in case . All cases were concluded to be negative when staining for Melan-A according to the human protocol. With the dog-adapted protocol, Melan-A was moderately positive in approximately 30% of the cells of case , with appropriate cytoplasmic staining (Fig. c). In case , Melan-A showed occasional cells with strong cytoplasmic staining, representing less than 10% of the cells in section (Fig. c). PNL2 showed strong cytoplasmic positivity in approximately 20% of the surface of the tumour from case (Fig. d). In the tumour from case PNL2 showed moderate to strong cytoplasmic staining in approximately 50% of the surface of the neoplasm (Fig. d). (Table ). Diagnosis Case 1 had a classical pattern for hemangiosarcoma. Cases and were at first diagnosed as sarcomas according to the human protocol but, after immunohistochemistry with dog-adapted protocols, were finally diagnosed as lightly pigmented melanomas.
(Table ). Prognoses All three patients underwent several surgeries but had recurrence when euthanized within 22–40 months after the first surgery. One of these dogs showed systemic signs with coughing, suggesting potential metastatic disease, when euthanized 30 months after the first surgery. (Table ). 
In this series of malignant spindle cell tumours from the nictitating membrane we include a hemangiosarcoma, which is quite uncommon in the Nordic countries, and two lightly pigmented melanomas, which are even more rarely seen [C. R. Reilly et al., ACVO 2005 abstract no.: 39]. We used dog-adapted immunohistochemical protocols to secure a correct diagnosis. The breeds involved in this study were a 12 kg Dachshund, a 37.5 kg Akita, and a 9 kg Shetland Sheepdog. The Dachshund with the hemangiosarcoma corresponded well to what is recorded in former studies on nictitating membrane hemangiosarcoma, where there is no apparent breed predisposition but medium and large-sized dogs are more commonly affected . The white-coated Akita and the tricolour Shetland Sheepdog with the melanomas were not black-coated, as former studies have indicated, but on the other hand these tumours were only very sparsely pigmented. All three patients in this study were intact females, which may be an effect of the low numbers of cases included in this report, as most former studies on conjunctival hemangiosarcoma and melanomas demonstrated no sex predilection or a slight male predominance [J. W. Herrmann et al., ACVO 2016 abstract no.: 123]. Only one study suggested a female predominance in conjunctival melanomas . This is even contrary to the studies that have suggested a hormonal link between hemangiosarcoma in general and neutering status, where occurrence of hemangiosarcomas is more common in neutered individuals . The age at onset was 8–14 years (mean 11 years), which correlates well with the age span reported in former studies . The clinical signs, namely protrusion of the nictitating membrane with firm or irregular masses of different shades of pink, most often expanding the nonpigmented areas of the membrane, epiphora and/or conjunctival hyperaemia, are as described in former studies on nictitating membrane neoplasia . The blood blister-like appearance is quite typical for hemangiosarcomas . The ulcerated surface of the melanoma in case could be an indication that the tumour was becoming devitalized or a sign of self-trauma. All three cases were positioned on the palpebral surface of the nictitating membrane. In former studies tumour growth in general has been reported on the bulbar surface, the palpebral surface or from the leading edge of the nictitating membrane . Hemangiosarcomas and hemangiomas seem to originate more often from either the palpebral surface or the leading edge of the nictitating membrane . In this case series the hemangiosarcoma arose on the palpebral surface of a sparsely pigmented nictitating membrane, correlating with the theory that there is an increased risk of developing vascular neoplasia when the nonpigmented nictitating membrane is exposed to UV light . No evidence of actinic damage (solar elastosis, solar vasculopathy or solar fibrosis) was noted around the neoplasms; however, these lesions are not always visible around UV-induced tumours. As a preventive measure, owners could therefore limit their dog's sun exposure, even though the UV index in Scandinavia, according to the WHO, is only around half of that in Southern Europe, Australia and the USA . Both the hemangiosarcoma and the melanomas recurred one or several times after the first surgery. Sarcomas in general and melanomas both demonstrate aggressive, locally invasive tissue involvement. Some studies on hemangiosarcomas even discuss de novo tumours arising from the same location . 
Early, complete surgical excision of hemangiosarcomas is recommended and may be curative, though recurrence is a risk . Melanomas have the potential to occur and recur multifocally, with recurrence being a risk even after aggressive surgical treatment such as complete excision of the nictitating membrane [C. R. Reilly et al., ACVO 2005 abstract no.: 39]. Full-thickness resection of the whole nictitating membrane seemed to delay recurrence by at least 1.5 years in this case series. The treatment of choice for hemangiosarcomas and melanomas is surgery. The prognosis is probably better when surgery is performed by a surgeon with microsurgical skills and access to an operating microscope and microsurgical equipment . Radiation therapy and systemic chemotherapy have been used with success in non-dermal hemangiosarcomas . For melanomas, radiation therapy, systemic chemotherapy and immunotherapy have all proven successful . None of the above-mentioned therapies were applied to the three cases in this series. All three cases showed recurrence when euthanized 22–40 months after the first surgery. The presence of respiratory signs in case indicated possible metastatic spread, which is also described in earlier studies, although metastasis was not confirmed in this case . The three cases were diagnosed by histopathology and immunohistochemistry. Immunohistochemistry has in recent years been used in several studies on tumours of the nictitating membrane . It is a valuable tool for the pathologist to further differentiate the type of tumour being analysed and to diagnose more uncommon tumours and new subtypes, providing new knowledge. It is essential to use dog-adapted protocols to get a precise diagnosis . These dog-adapted immunohistochemical protocols ensure correct antigen retrieval and thereby a correct diagnosis of the tumour. Antigen retrieval is a technique required in most formalin-fixed tissues before immunohistochemical staining. It is used to reverse epitope masking and restore epitope-antibody binding often lost during the fixation process. Omitting this step may result in weak or false-negative staining . We report three rare cases of spindle cell tumours in the nictitating membrane in dogs. The tumours arose in nonpigmented areas of the membrana nictitans and they showed invasive growth with post-surgical recurrence in all three dogs, once or several times. We performed extensive histopathological and immunohistochemical investigations to further subclassify these tumours. It is essential to use dog-adapted immunohistochemical protocols to reach the correct diagnosis.
Affinity Ultrafiltration Mass Spectrometry for Screening Active Ingredients in Traditional Chinese Medicine: A Review of the Past Decade (2014–2024)
f15049c7-8bcb-459c-9d35-9eb9da12ab29
11820328
Surgical Procedures, Operative[mh]
Traditional Chinese medicine (TCM), a remarkable medical resource with a long history, has unique advantages in preventing and treating various diseases, particularly in the control of major epidemics and clinical treatment. Approximately 35% of the global pharmaceutical market annually derives directly or indirectly from natural products, predominantly from plant sources (25%), with microbial sources (13%) and animal sources (3%) following . The active ingredients of TCM form the material basis for its therapeutic effects and serve as an important source of biologically active compounds. However, TCM and its formulations often contain numerous chemical components, and the complexity of these mixtures makes the evaluation and identification of active ingredients highly challenging. Thus, identifying the active ingredients in TCM is a critical scientific challenge in its modernization and a significant bottleneck in its global development. The traditional strategy for researching active ingredients in TCM involves "chemical extraction and separation, molecular structure identification, and pharmacological activity evaluation" . Although effective, this strategy is cumbersome and time-consuming, making it challenging to efficiently screen active structures. Modern pharmacological research indicates that a drug's affinity for biological macromolecules is the first step in its mechanism of action, and the drug target is the critical starting point for its therapeutic effects in vivo . Small molecules in TCM regulate biological processes and exert medicinal effects by interacting with target proteins in organisms. Consequently, molecular targeting methods for drug screening, based on disease-related biomacromolecules, have emerged. Affinity ultrafiltration (AUF)–liquid chromatography (LC)–mass spectrometry (MS) is a solution affinity selection platform that separates target–ligand complexes in solution via ultrafiltration. It serves as a powerful tool for identifying active molecules within complex natural products. Compared with traditional methods, AUF is simple to operate, and it significantly reduces screening time and lowers the consumption of samples and reagents. The technology enables the online integration of various detection instruments, allowing for an accurate reflection of the interaction between the natural conformation of active substances and receptors. Due to its high sensitivity and strong selectivity, AUF-LC-MS holds unique value in small-molecule drug discovery and has garnered widespread attention from the pharmaceutical community. Previously, Chen et al. provided an overview, summary, and outlook on AUF-LC-MS technology. Building on that work, this review offers a more comprehensive account of the basic principles, characteristics, and influencing factors of AUF-LC-MS technology, and summarizes its applications in screening bioactive components of medicinal plants over the past ten years. For example, Panax ginseng has many functions, such as enhancing immunity, relieving fatigue, and providing antioxidant effects. Panax ginseng is rich in saponins, which have a wide range of benefits for the human body. Modern pharmacological research shows that the most important ones are ginsenosides Rg1, Re, and Rb1 . In recent years, researchers have used α-glucosidase, acetylcholinesterase, monoamine oxidase type B, and N-methyl-D-aspartic acid as targets, and applied AUF-LC-MS technology to screen out 24, 16, 7, and 3 active ingredients, respectively . 
In addition, 5, 12, and 32 active ingredients were also screened from Coptis chinensis Franch., Salvia miltiorrhiza Bge., Curcuma longa, etc. . Please refer to for specific contents, which provide a certain scientific basis for rapid targeted screening of active ingredients in medicinal plants. This review also discusses the adaptability of this technology to a wider range of natural products and its combination with other analytical techniques, and prospects for its development, so that AUF technology can be widely used internationally. AUF combines affinity capture with ultrafiltration, facilitating high-throughput compound screening . Developed in 1981, the technique was initially used to evaluate the theoretical and experimental applications of ultrafiltration in clinical serum binding assays . Discovering drug target proteins is crucial for drug research . In the late 1990s, AUF became widely used in targeted drug discovery and an indispensable tool for many pharmaceutical companies. Over the past decade, significant advancements have been made in AUF in terms of membrane materials, separation properties, and system optimization. Many new affinity membrane materials have been developed recently to enhance the selectivity and performance of the membranes. The separation capabilities of these membranes are enhanced by introducing various affinity ligands or through surface modifications. For instance, the use of hydrophilic polymers, nanomaterials, or composite materials enhances the affinity and anti-fouling properties of these membranes . Researchers have further improved the separation abilities of AUF membranes by combining various affinity ligands, such as different antibodies, proteins, or small molecules. Particularly in complex biological systems, this multi-level separation significantly enhances the purity and efficiency of target molecule separation . Additionally, AUF technology has increasingly adopted automation and intelligent control systems to enhance operational efficiency. For example, the use of real-time sensors to monitor membrane status, in combination with machine learning algorithms for automatic adjustments, enhances both the performance and the operational ease of the membrane system . The ultrafiltration screening method is based on specific ligand–receptor binding and screens for potential active ligands that bind to target proteins selected according to disease-specific characteristics . First, the ligand mixture is incubated with the receptor. After ultrafiltration, the bound ligands are dissociated from the receptor, or the bound fraction is analyzed directly. Finally, the potential active ingredients are analyzed by LC-MS. AUF-MS is mainly divided into centrifugal ultrafiltration-MS (CU-MS) and pulsed ultrafiltration-MS (PU-MS). In both methods, the basic principle of small-molecule screening is the same: ligand enrichment is achieved through the selectivity of a semi-permeable membrane. In CU-MS, the ultrafiltration chamber and the LC-MS platform operate independently, necessitating manual injection of ultrafiltration samples into the LC-MS system, hence the term "off-line ultrafiltration". CU-MS employs commercial ultrafiltration centrifuge tubes to screen compounds, offering straightforward procedures and good reproducibility. Chen et al. developed an off-line ultrafiltration-LC-MS platform to screen for inhibitors of α-glucosidase and pancreatic lipase. 
Fifteen potential ligands, including glucomoringin, 3-caffeoylquinic acid, and quinic acid, were quickly screened and identified from Moringa oleifera leaf extracts. The study identified 14 potential α-glucosidase ligands and 10 potential pancreatic lipase ligands. Feng et al. captured 12 phytochemicals with varying affinities for topoisomerase I, topoisomerase II, COX-2, and ACE2 from Dysosma versipellis root and stem extracts by using an off-line ultrafiltration-LC-electrospray ionization (ESI)-MS/MS method. In vitro antiproliferation tests demonstrated that podophyllotoxin and quercetin had the strongest inhibition rates on A549 and HT-29 cells, whereas kaempferol exhibited a significant dose-dependent effect on COX-2. Additionally, quercetin exhibited a strong inhibitory effect on ACE2 . PU-MS consists of a flow chamber, a magnetic stirrer, and an ultrafiltration membrane . It is an online combination of PU and electrospray MS. After the test sample and target protein are added to the flow chamber, the ligand–receptor complex and inactive components can be separated by applying pressure. Unlike CU-MS, this technology is an online affinity MS screening method. Hence, it is also referred to as online ultrafiltration. PU-MS was first proposed by van Breemen et al. to screen potential compounds binding to target receptors from complex systems. Adenosine deaminase inhibitors were successfully identified from a combinatorial chemical library of 20 adenosine analogs by using this method. Beverly et al. utilized PU-MS to evaluate a 35 μL binding chamber's ability to screen ligands forming noncovalent complexes with protein targets. They found that the platform quickly screened and enriched the carbonic anhydrase inhibitor acetazolamide from bacterial fermentation broth extracts, completing the process in only 5 min. Compared with PU, CU cannot be integrated with MS online. Additionally, the concentration polarization during centrifugal ultrafiltration can reduce the filtration speed and, in severe cases, cause protein adsorption and deposition on the membrane surface, affecting free drug transport. Thus, CU is primarily used for screening small-molecule active compounds within a limited range. By contrast, PU, which easily integrates with LC-MS to form an automated, high-throughput system, is more effective for describing receptor–ligand binding characteristics, drug metabolism, and product identification. In conclusion, the advantage of ultrafiltration-based methods lies in their ability to rapidly provide binding information between drug targets and compounds. These methods can be used to study the synergistic or antagonistic effects of multiple compounds. The history of MS traces back to the early 20th century with the invention of the parabolic mass spectrometer by J.J. Thomson. In 1919, Aston developed the first velocity-focusing MS, marking a significant milestone in the field. Initially, MS was primarily used to determine the atomic weight of elements and isotopes. With advancements in ion optics theory, the technology continually improved, and by the late 1950s, it was widely applied in the analysis of inorganic and organic compounds. Owing to its high sensitivity, accuracy, and resolution, MS has become one of the most crucial analytical techniques in life sciences, medicine, and chemistry . 
The advent of MS technology, particularly soft ionization methods like ESI and matrix-assisted laser desorption/ionization (MALDI), has extended the application of MS to the early stages of drug discovery, specifically in the identification of lead compounds . Compared with earlier detection methods, MS does not require derivatization or isotope labeling, thereby expanding the range of applicable compounds, accelerating detection, and enhancing sensitivity and specificity. Thus, integrating MS with target affinity techniques—referred to as target molecule affinity-MS—has made drug screening more efficient and effective. In recent years, numerous MS techniques have been developed to address the increasing demand for analyzing and identifying specific components within complex substrates from multiple perspectives. These include techniques such as AUF-LC-MS, ESI-Q-TOF-MS, ultrahigh-performance LC (UPLC)–Orbitrap–time-of-flight (TOF)-MS, MALDI-TOF-MS, LC-MS, GC-MS, FT-ICR-MS, and DART-MS. Based on the above explanation, the ultrafiltration method effectively enriches and separates ligands that bind to target proteins while being easy to operate and cost-effective. AUF can screen ligand–protein complexes from unbound substances, and when combined with LC and MS, it enables rapid separation and identification of potential active ingredients. It can identify target substances at various concentrations, and it is suitable for analyzing small quantities of complex mixtures such as combinatorial compound libraries and extracts or fractions of medicinal plants. When AUF is combined with LC-MSn, the high sensitivity of MS compensates for the limitations of LC in detecting minute components with low sample content . As a high-throughput method, AUF-LC-MS performs well in screening active substances without stringent sample size requirements and offers additional advantages such as simplicity of operation and strong specificity. However, this method has certain limitations: false positives resulting from nonspecific adsorption in the ultrafiltration process typically need to be addressed through parallel control experiments using an inactivated target protein group or a serum protein replacement group. Additionally, ultrafiltration screening is primarily based on the affinity between the target protein and the ligand. As a result, while it evaluates the ligand's affinity for the target protein, it does not directly reflect the ligand's biological activity . Currently, various methods exist for screening active ingredients, such as cell membrane chromatography, magnetic bead screening, UV-visible spectroscopy, nuclear magnetic resonance (NMR), fluorescence, and electrochemical methods . Compared with these methods, the combination of AUF and MS for screening small-molecule active substances in TCM offers several advantages, including ease of operation, high sensitivity, and specific results. Traditional chromatographic methods based on optical or radioactive substances often encounter matrix interference . This interference complicates the identification and analysis of complex components in natural products. For instance, UV-visible spectroscopy measures α-glucosidase activity by hydrolyzing p-nitrophenyl-α-D-glucopyranoside, producing p-nitrophenol, detectable at 400 nm . However, NMR is time-consuming and not well suited for rapid inhibitor screening. Additionally, fluorescence and electrochemical methods suffer from significant interference issues . 
Consequently, a rapid and accurate method to screen active compounds with inhibitory effects is urgently needed. The ligand matrix does not affect the screening process of affinity MS. Thus, this unique advantage renders it particularly suitable for screening active ingredients in complex systems, especially traditional medicinal plants. Owing to its high sensitivity and selectivity, AUF-LC-MS has been effectively utilized to isolate and identify target substances from complex samples, playing a pivotal role in extracting active molecules from natural products. In AUF, researchers study the interactions between small drug molecules and biological targets in solution. Binding between AUF receptors and ligands occurs in solution, which avoids alterations in their properties from labeling or chemical coupling to solid supports, thereby preserving their natural conformation and interactions. Ultrafiltration requires only small quantities of the target, and some protein targets can be reused, making it a viable option when targets are costly, scarce, or available in limited quantities . Alternatively, the retention capability of the ultrafiltration membrane allows for the direct selection of active components that bind to target substances without the need for pretreatment, such as in immobilized enzyme online MS and cell membrane chromatography-MS . AUF-MS enables rapid determination of binding constants between biological targets and small drug molecules while concurrently providing activity data for these molecules. In the combined AUF-MS approach, AUF exhibits robust specificity and screening capabilities for small ligands in complex mixtures. Meanwhile, LC-MS offers potent functionality for efficient separation and structural identification, effectively minimizing matrix interference. AUF-LC-MS is extensively utilized for screening active ingredients from complex substrates because of its high-throughput capabilities. However, this technique has limitations, including the possibility that some identified candidates may not exhibit the expected activity or may show elevated activity, leading to potential false positives . Various factors must be considered during the experimental process, including the concentration of the target and screening substances, the material of the ultrafiltration membrane, the selection of the dissociation solvent, the interception volume, the co-incubation time, the centrifugal speed, and the solution pH, to mitigate false-positive or false-negative results. The screening conditions must be optimized to ensure the high efficiency and specificity of the screening results, and operations should be rationally designed and standardized. Additionally, the design of negative control experiments is crucial for reducing false positives and improving the accuracy of the results . 4.1. Concentration of the Target and the Screened Substances The concentrations of targets and screening substances are critical factors influencing the affinity filtration process. If the ligand concentration is significantly higher than that of the target protein, it may prevent some active ingredients from binding to the target proteins because ligand binding to target proteins is inherently competitive, leading to false negative results. Conversely, if the ligand concentration is too low, it may enhance nonspecific adsorption, thus increasing the likelihood of false positives. These false positives are often due to the nonspecific binding of the compound to the target protein. Yang et al. 
were the first to verify AUF-LC screening results to eliminate false positives by using competitive binding experiments. In fact, competitive binding experiments not only eliminate false positives but also exclude ligands that bind to different sites than those of competitively binding compounds. Wang et al. evaluated the feasibility of using competitive binding experiments combined with AUF-LC to identify xanthine oxidase (XOD) inhibitors in Perilla frutescens (L.) Britt., aiming to reduce false positives. In the experiment, P. frutescens extracts were incubated with XOD-free, XOD-present, or XOD-blocked active sites before ultrafiltration, and the total binding degree and specific binding degree of each compound were calculated on the basis of peak area. The results indicated that AUF-LC significantly reduced the number of false positives identified. However, this method cannot eliminate all false positives and may exclude some effective inhibitors. Therefore, a thorough methodological review is essential to obtain reliable binding results. The equilibrium dissociation constant (KD) is a critical metric for evaluating the interaction between a ligand and its target protein, with each component having its own distinct KD value. The KD values of the receptor and target ligand should be closely matched; otherwise, significant discrepancies may result in false positives or false negatives. In general, the receptor concentration should be close to the KD value of the weakest ligand. If the ligand concentration is too high, only ligands with strong binding affinity could bind to the target protein at competitive binding sites. Therefore, in actual experiments, the ligand concentration should be equal to or less than that of the receptor. Wang et al. developed an AUF-UPLC method to directly determine the KD of compounds in P. frutescens extracts and their target proteins, including the KD determination for α-glucosidase ligands in the ethyl acetate fraction of P. frutescens. The recovery rate, binding degree, and signal-to-noise ratio of α-glucosidase ligands in PFEA were determined using AUF-LC, followed by KD calculation using the proposed equilibrium. Oleanolic acid and apigenin were identified as high-affinity ligands of α-glucosidase, with KDs of 44.9 and 88.5 μM, respectively. These values were consistent with the results from isothermal titration calorimetry, kinetic analysis, and molecular docking simulations. The results demonstrate that this method is simple and easy to implement, allowing direct determination of KD values for compounds in natural product extracts without the need for internal standards or calibration agents. Optimizing these methods can enhance the screening accuracy and reliability of AUF-LC-MS, providing a robust foundation for the identification of active ingredients in complex substrates. 4.2. Ultrafiltration Membrane Material In AUF-LC-MS, ultrafiltration membranes separate ligand–receptor complexes from unbound components. The selection of ultrafiltration membranes primarily involves two factors: pore size and material . An ideal ultrafiltration membrane should effectively retain the target biological macromolecules while preventing leakage or clogging. The pore size should be less than one-third of the biomacromolecule’s size to ensure effective retention . Selecting the appropriate pore size improves separation efficiency and prevents leakage of unbound components. 
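As a small worked illustration of the one-third guideline above, the following Python sketch picks a membrane cut-off for a given target protein. It is a minimal example under stated assumptions: the rule is read here as a molecular-weight cut-off (MWCO) heuristic, and the shortlist of cut-off values is hypothetical rather than taken from the cited studies.

# Minimal sketch: choose an ultrafiltration MWCO from the "one-third" guideline.
# Assumptions (not from the review): the rule is applied to molecular weight
# (MWCO <= target MW / 3), and the candidate cut-offs below are a hypothetical
# shortlist of commonly available membranes.
TYPICAL_MWCO_KDA = [3, 10, 30, 50, 100]

def suggest_mwco(target_mw_kda: float) -> int:
    """Return the largest listed MWCO not exceeding one third of the target MW."""
    limit = target_mw_kda / 3.0
    candidates = [m for m in TYPICAL_MWCO_KDA if m <= limit]
    if not candidates:
        raise ValueError("Target too small for the listed membranes.")
    return max(candidates)

# Example: for a roughly 68 kDa enzyme, the helper suggests a 10 kDa membrane,
# which retains the enzyme (and any bound ligands) while letting unbound
# small molecules pass into the filtrate.
print(suggest_mwco(68.0))  # -> 10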
An ideal ultrafiltration membrane material should minimize specific adsorption with potential ligands and receptors. Common ultrafiltration membrane materials include polyvinyl fluoride, polysulfone, polyether ketone, and methylcellulose . These materials exhibit low nonspecific binding and are therefore widely used in ultrafiltration membrane production. Selecting the appropriate pore sizes and materials optimizes the separation efficiency of AUF-LC-MS and enhances the accuracy and reliability of the experiment, ensuring the authenticity of ligand–receptor interactions and reducing false-positive results. 4.3. Choice of Dissociation Solvent The complex components, diverse structures, and varying polarities of TCM extracts make it challenging to successfully dissociate ligands from the affinity target while minimizing nonspecific adsorption, a key factor affecting screening results. Two main methods are currently used to denature enzymes: adding acid to the dissociation solvent to inactivate the enzyme in a low pH environment or using organic solvents for enzyme denaturation. However, using organic solvent-based dissociation solutions only can sometimes increase nonspecific adsorption. Some related studies have demonstrated that acid-containing organic solvents, as opposed to those with organic solvents only, can effectively reduce nonspecific adsorption of non-affinity interacting substances. For example, Xie et al. used a methanol–water (90:10) mixture to screen potential TCM components targeting 5-lipoxygenase and cyclooxygenase-2. Comparison of the ultrafiltrate chromatograms between the experimental and control groups revealed significant differences in the peak areas of active ingredients, with lower signals for nonspecifically adsorbed substances. Conversely, some researchers have successfully screened small-molecule inhibitors of cyclooxygenase and glutathione reductase from TCM by using dissociation solutions containing organic solvents only . The findings indicate that different dissociation solutions yield varying effects, necessitating multiple experimental attempts to optimize dissociation conditions. In summary, selecting an appropriate dissociation solvent is essential for reducing nonspecific adsorption and enhancing the accuracy of screening results. Multiple experimental attempts are recommended to identify the optimal dissociation conditions by comparing results, thereby effectively screening the active ingredients in TCM extracts. 
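Before moving on to the applications in Section 5, the sketch below makes the peak-area bookkeeping of Section 4.1 concrete. It is a minimal Python illustration under stated assumptions: the review does not reproduce the exact equations of the cited studies, so the binding-degree formulas (total binding judged against a no-target incubation, nonspecific binding against a denatured-target incubation, specific binding as their difference) and the single-point KD estimate from the standard 1:1 equilibrium R + L <-> RL are common conventions, and all numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class UltrafiltrationAreas:
    active: float     # ultrafiltrate peak area after incubation with the active target
    denatured: float  # peak area after incubation with the denatured target (control)
    blank: float      # peak area after incubation without target (control)

def binding_degrees(a: UltrafiltrationAreas) -> tuple[float, float]:
    """Return (total, specific) binding degrees in percent, relative to the blank.
    Note: some studies normalize to the active-target area instead, which caps
    the values at 100%; the choice of convention is an assumption here."""
    total = 100.0 * (a.active - a.blank) / a.blank
    nonspecific = 100.0 * (a.denatured - a.blank) / a.blank
    return total, total - nonspecific

def kd_single_point(receptor_total_uM: float, ligand_total_uM: float,
                    bound_fraction: float) -> float:
    """One-point KD (uM) for 1:1 binding, KD = [R][L]/[RL]; bound_fraction = [RL]/[L]total."""
    rl = bound_fraction * ligand_total_uM
    return (receptor_total_uM - rl) * (ligand_total_uM - rl) / rl

# Hypothetical peak areas for one candidate compound.
areas = UltrafiltrationAreas(active=5.2e5, denatured=2.1e5, blank=1.8e5)
total, specific = binding_degrees(areas)
print(f"total {total:.0f}%, specific {specific:.0f}%")     # ~189% and ~172%
# Hypothetical equilibrium point: 10 uM receptor, 10 uM ligand, 15% of ligand bound.
print(f"KD ~ {kd_single_point(10.0, 10.0, 0.15):.0f} uM")  # ~48 uM

In practice these ratios would be computed per chromatographic peak, and, as discussed above, candidate ligands flagged in this way would still need confirmation by competitive-binding or enzyme-inhibition assays.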
5.1. High-Throughput Screening (HTS) of Active Ingredients of TCM Efficient and rapid screening of active ingredients from complex systems, such as TCM, remains a key challenge in modern pharmaceutical research. Traditional methods of chemical separation, structural identification, and activity screening face the following several issues: unclear objectives, cumbersome procedures, high workload, lengthy processes, and potential loss of active ingredients. Recent pharmacological research has demonstrated that the affinity between drugs and biological macromolecules—such as enzymes, receptors, DNA, and RNA—is crucial for drug action. Molecular targeting strategies for drug screening have emerged, focusing on disease-related biological macromolecules as targets. Ultrafiltration offers excellent separation and minimizes matrix interference, whereas LC-MS provides powerful analytical capabilities for the rapid identification of multiple components. Combining these technologies to discover small-molecule active ingredients in TCM holds significant potential. Recently, this combined approach has been successfully applied to the screening of lead compounds, compound libraries, and active ingredients from natural products. Numerous studies have confirmed that this method rapidly screens and identifies complex ligands in natural products . 
In recent years, scientists have frequently combined AUF with MS detection to screen active ingredients in combinatorial chemical libraries, identifying novel inhibitors of key targets like α-glucosidase. α-Glucosidase is a key enzyme in carbohydrate hydrolysis, cleaving the α-1,4-glucoside bond at the non-reducing end of oligosaccharides, thereby releasing glucose and raising blood sugar levels. α-Glucosidase inhibitors reduce glucose production by inhibiting this enzyme's activity, and they are widely used in the treatment of type 2 diabetes mellitus (T2DM) . Although some α-glucosidase inhibitors derived from microorganisms, such as acarbose and voglibose, are used clinically, they can cause severe gastrointestinal side effects . Natural α-glucosidase inhibitors from medicinal plants offer potential as alternative treatments for T2DM due to their low toxicity. Consequently, researchers have recently screened potential α-glucosidase inhibitors from various natural plants, including Cichorium glandulosum Boiss. et Huet, a chicory species in the Asteraceae family and a traditional Uighur medicinal plant. C. glandulosum is listed as a "medicinal food homology" item in the 2015 Catalogue of Homologous Medicine and Food by the National Health and Family Planning Commission of China. Studies have shown that chicory exhibits significant hypoglycemic activity and inhibits α-glucosidase . Chen et al. used AUF-LC-MS to screen and identify four potential α-glucosidase inhibitors from C. glandulosum seed extract to further investigate its hypoglycemic components. The preliminary identification included esculetin, chlorogenic acid, isochlorogenic acid B, and isochlorogenic acid A. Subsequently, Abudurexiti et al. used AUF to screen C. glandulosum extracts, identifying the following six potential α-glucosidase inhibitors: quercetin, lactucin, 3-O-methylquercetin, hyperoside, lactucopicrin, and isochlorogenic acid B. Potential α-glucosidase inhibitors have been screened from various natural plants, including the leaves of Rubus suavissimus and Inonotus obliquus and the roots of Siraitia grosvenorii . The screening results of α-glucosidase-targeted active ingredients are detailed in . Medicinal plants have been widely used to treat various diseases for thousands of years owing to their value as natural resources. Extracting biologically active compounds from medicinal plants has become a major focus of research worldwide. Chemical components in medicinal plants often have low abundance, complex structures, and multiple biological targets. The active ingredients and mechanisms of action are often challenging to define precisely. AUF-LC-MS is well suited for screening active ingredients in complex natural products. This technology combines the separation and analytical strengths of AUF and LC-MS, facilitating HTS and rapid identification of bioactive components in complex natural products. Andrographis paniculata (Burm. f.) Wall. ex Nees is derived from the dried aboveground parts of the plant. It exhibits a broad range of pharmacological activities in in vivo and in vitro studies, with anti-inflammatory effects being the most prominent. Cyclooxygenase-2 (COX-2) is a key enzyme in prostaglandin (PG) synthesis, and its inhibitors are effective anti-inflammatory agents. Jiao developed an AUF-based analytical method combined with UPLC and quadrupole TOF-MS (BAUF-UPLC-Q-TOF-MS) for rapid screening and identification of COX-2 ligands. Five COX-2 inhibitors were identified from A. paniculata extracts. 
Apart from its anti-inflammatory properties, A. paniculata exhibits immunomodulatory and antiviral effects. Feng screened 11 potential ligands from A. paniculata targeting COX-2, IL-6, and ACE2. In addition to the previously mentioned disease-related targets, AUF-MS can be used to screen active ingredients against 24 targets, including lipase, thrombin, and tyrosinase (TYR). Lipase catalyzes the hydrolysis of fats (lipids). Lipase inhibitors regulate lipids by inhibiting the catalytic activity of human pancreatic lipase, a key enzyme in triacylglycerol hydrolysis, aiding in the control or treatment of obesity-related conditions. TYR is a rate-limiting enzyme in melanin production. Albinism is a genetic disorder caused by mutations in the TYR gene, leading to impaired TYR production. Thrombin (FIIα) is a key enzyme in thrombosis and a downstream component of the coagulation pathway. It converts fibrinogen into fibrin and coagulation factor XIII into factor XIIIα. Together with calcium ions, this process forms the fibrin network, a critical step in thrombosis. Consequently, FIIα has gained widespread attention as a target for antithrombotic therapies. summarizes the applications of AUF-MS in screening natural product extracts from January 2014 to May 2024. 5.2. Screening of Active Ingredients in TCM Compound Preparations TCM compound preparations are formulated on the basis of TCM theory. Their chemical components are highly complex, making it challenging to rapidly screen and identify active ingredients using conventional analytical methods. Historically, clarifying the bioactive components and mechanisms of action in single medicinal plants has been difficult, let alone in natural drug formulas, due to their low content, complex chemical structures, and multicomponent, multitarget effects. AUF-LC-MS remains one of the most powerful tools for screening active compounds from complex natural products . In recent years, Ronghua Dai's research group has employed AUF-LC-MS to study the interactions between extracts of Zishen Pills, a TCM compound preparation, and biological target proteins. COX-2 is a key enzyme that catalyzes the conversion of arachidonic acid (AA) into PGs. It is specifically induced during inflammation, degeneration, and tumorigenesis. The research group employed AUF-LC-MS to investigate the interaction between Zishen Pill extract and COX-2, selecting celecoxib and glipizide as positive and negative controls, respectively. The study identified 20 compounds that specifically bind to COX-2, 8 of which are potential COX-2 inhibitors. Their structures were elucidated using Fourier transform ion cyclotron resonance MS. Further validation was conducted using in vitro COX-2 inhibition assays and molecular docking studies. Additionally, the research group further screened Zishen Pills for 5-lipoxygenase (5-LOX) inhibitors . It was found that 5-LOX plays a crucial role in inflammatory processes, and it is a key enzyme in the metabolism of AA to leukotriene A4 (LTA4). The research team optimized the concentration of 5-LOX enzyme, incubation conditions (temperature and time), pH, and ionic strength based on prior experiments to achieve more accurate screening results. The screening results indicated that six compounds may possess potential 5-LOX inhibitory activity, with anemarrhenasaponin I, timosaponin AI, nyasol, and demethyleneberberine demonstrating significant enzyme inhibition. 
Further, structure–activity relationship studies revealed that the hydroxyl group is essential for ligand binding to the 5-LOX protein, followed by the aromatic ring, which engages in π–π interactions with amino acid residues in the 5-LOX protein. This study provides a scientific foundation for the development of 5-LOX inhibitors. 
glandulosum extracts, identifying the following six potential α-glucosidase inhibitors: quercetin, lactucin, 3-O-methylquercetin, hyperoside, lactucopicrin, and isochlorogenic acid B. Potential α-glucosidase inhibitors have been screened from various natural plants, including the leaves of Rubus suavissimus and Inonotus obliquus and the roots of Siraitia grosvenorii . The screening results of α-glucosidase-targeted active ingredients are detailed in .
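Candidate inhibitors from such screens are usually confirmed with an in vitro α-glucosidase inhibition assay and characterized by an IC50. The sketch below is a generic illustration of that follow-up step, not code from the studies cited here: the dose-response data are invented, and the four-parameter logistic fit is one common convention.

```python
# Generic follow-up to an AUF hit list (illustrative; data are invented):
# estimate IC50 for a candidate alpha-glucosidase inhibitor from a
# percent-inhibition dose-response curve using a four-parameter logistic fit.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: inhibition (%) as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** hill)

# Hypothetical assay readout: inhibitor concentration (uM) vs. % inhibition,
# e.g. computed as (1 - A_sample / A_control) * 100 from the absorbance of
# released p-nitrophenol in a pNPG-based alpha-glucosidase assay.
conc = np.array([1, 3, 10, 30, 100, 300], dtype=float)
inhibition = np.array([8, 18, 41, 66, 85, 93], dtype=float)

params, _ = curve_fit(four_pl, conc, inhibition,
                      p0=[0.0, 100.0, 20.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = params
print(f"Estimated IC50 ~ {ic50:.1f} uM (Hill slope {hill:.2f})")
```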
The fingerprint analysis of active ingredients in TCM is crucial for quality control and evaluation. Although traditional chemical fingerprints can reflect the overall characteristics of TCM, they are limited because the selected chemical components may not correspond directly to those that produce clinical effects. Therefore, integrating high-throughput screening technologies, such as AUF-LC-MS, to identify active ingredients in TCM and further obtain their biological fingerprints can address the limitations of chemical fingerprints and offer a novel approach for evaluating the efficacy of TCM. Recently, Mingquan Guo’s research group has made significant progress in studying Rhamnus davurica Pall. by using AUF-LC-MS. They established an AUF-LC-MS-based method to successfully screen and identify ligands in R. davurica that are potentially active against therapeutic targets like top I and COX-2 . The study identified 12 potential top I ligands and 11 potential COX-2 ligands, further demonstrating that these components exhibit anti-inflammatory and anti-proliferative activities in vitro. This study not only proposes a novel method to reveal the diverse active ingredients of TCM and their potential targets but also underscores the importance of biological fingerprint analysis in TCM research. By integrating bioaffinity technology with MS, the characteristics of active ingredients in TCM can be understood more comprehensively, providing a more scientific basis for its quality control.
This approach not only addresses the limitations of traditional chemical fingerprinting but also enhances the accuracy of TCM efficacy evaluation. Future research should focus on exploring the biological fingerprints of various TCMs to advance the quality standardization and modernization of TCM, ultimately supporting its broader application in clinical practice. In the analysis of small-molecule drug metabolites, modern analytical methods are diverse and highly efficient, with LC-MS being particularly prominent. This technology not only efficiently separates and detects drugs and their metabolites but also provides detailed structural information and supports metabolic pathway research, significantly advancing the fields of pharmacokinetics and pharmacodynamics. AUF-LC-MS, a pretreatment technique, has been widely applied in drug metabolism research. This method combines ultrafiltration technology with online LC-MS analysis to rapidly and efficiently assess the metabolic rate and extent of drugs at affinity targets like liver microsomes. Van Breemen and colleagues successfully used AUF-LC-MS to evaluate the metabolic characteristics of tricyclic psychotropic drugs like promethazine and to reveal the structural features of their main metabolites. Huang et al. demonstrated the potential of AUF-LC-MS in studying the pharmacological activity of natural products. They employed this technique to screen for potential lipoxygenase inhibitors in Saposhnikovia divaricata (Trucz.) Schischk. They also identified multiple metabolic pathways by using semi-preparative HPLC separation and in vitro cytochrome P450 metabolism studies, offering new approaches for evaluating the medicinal value of natural products. Methodologically, the advantage of AUF-LC-MS lies in its simplicity and high-throughput capabilities, making it particularly suitable for metabolite analysis and structural identification of complex samples. This technology not only allows researchers to quickly obtain pharmacokinetic data but also provides valuable structure–activity relationship information during drug design and optimization. In recent years, TCM has demonstrated unique advantages in treating complex diseases owing to its multicomponent and multitarget characteristics. Traditional methods often struggle to analyze the chemical components and pharmacological mechanisms of TCM. Bioaffinity MS offers a novel approach to address this issue. Notably, AUF-LC-MS has been widely applied in screening active ingredients in TCM owing to its high efficiency and simplicity. AUF-LC-MS shows significant potential in TCM research. This approach involves combining medicinal plant extracts with specific protein targets, using ultrafiltration to separate the conjugates, and then identifying the bound active ingredients through LC-MS. Recent studies have shown that the AUF-LC-MS yielded remarkable results in screening targets such as α-glucosidase, cyclooxygenase-2, and thrombin. This study found that compounds isolated from traditional Chinese medicine by using this method exhibited excellent enzyme inhibitory activity, with high selectivity and specificity. Although the AUF-LC-MS method holds promising prospects for screening active ingredients in TCM, it still possesses limitations and faces various challenges. First, given the complex nature of TCM compounds, molecular interactions may compromise analytical accuracy. 
Future efforts might incorporate computational biology techniques to predict and confirm inter-component interactions, thereby enhancing the accuracy of screening and analysis for potentially active components. Secondly, employing multi-stage ultrafiltration membranes or a series of ultrafiltration tubes could facilitate the development of multichannel or high-throughput AUF systems, significantly enhancing the efficiency and precision of multi-target screening. Additionally, despite the therapeutic benefits of TCM volatiles like monoterpenes and sesquiterpenes, their volatile and low-density nature leads to immobilization and trapping challenges during the AUF process. Improvements might be achieved by employing tightly sealed reaction vessels, developing specialized ultrafiltration membranes, or operating in low-temperature conditions to minimize the loss of volatile components. Given the unique characteristics of volatile components, integrating auxiliary technologies such as gas chromatography for pretreatment or post-treatment might improve screening accuracy and efficiency. Moreover, false positives and nonspecific binding restrict the wider application of this technique. Utilizing enzyme denaturation controls, together with enzyme activity assays and molecular docking, can significantly mitigate nonspecific binding and boost screening accuracy. Considering that MS and LC analysis tools have become more miniaturized and automated, the application of AUF-LC-MS is expected to become more widespread and in-depth. In the future, the use of this technology in screening TCM active ingredients should extend beyond common targets to include more significant protein targets. This approach could facilitate the discovery of new drugs and enhance the understanding of the pathogenesis of complex diseases. Therefore, AUF-MS is a powerful tool for identifying and studying the mechanisms of active ingredients in TCM. With ongoing innovations and improvements, this method is likely to play a more significant role in natural product research and new drug development. Future research should focus on overcoming current technical bottlenecks and identifying more disease-related protein targets, which would advance modern research on TCM.
Rare disease day and Ophthalmology
1e9bbf5c-d64f-4763-bdfb-bc5038c244cf
11826610
Ophthalmology[mh]
Sexual and reproductive health communication intervention for caretakers of adolescents: a quasi-experimental study in Unguja- Zanzibar
1e112b31-f538-4d72-a43b-13aab7d6a1da
6599269
Health Communication[mh]
This study mainly focused on adolescents’ Sexual and Reproductive Health (SRH). With the expected impact of reducing adolescents’ risky sexual behaviour, a strategy of utilizing parents or parent figures (caretakers) was adopted, targeting the parenting practice of communicating with adolescents on sexual and reproductive health issues. Having identified the needs, strengths and deficits of caretakers in relation to SRH communication, an intervention guided by a behavioural model (the Information-Motivation-Behavioral skills (IMB) model) was implemented to equip caretakers with the necessary SRH knowledge, motivation and skills to communicate. The effect of this intervention on improving SRH-related knowledge, motivation, and skills was then evaluated after 1 month, and its effect on improving parent-child SRH communication was evaluated at 6 months and 1 year following the intervention. Of the 836 respondents, 667 were female, 736 were married, and 341 were of higher educational attainment. The average age of all participants was 45.7 years; 350 had female adolescents and 640 lived with their biological adolescents. The intervention resulted in an increase in SRH-related knowledge, motivation and skills to communicate after 1 month, and a sustained increase in parent-child communication at the 6-month and 1-year follow-ups. In conclusion, the study found that caretakers can adopt the role of SRH educators if they are provided with the necessary support, even in areas where cultural norms discourage such communication. IMB-based SRH communication interventions should be considered a promising way to empower communities so that parents can teach their adolescent children about SRH. Sexual and Reproductive Health (SRH) becomes a major area of concern during adolescence because of the apparent risky sexual behaviours, which include early sexual debut, multiple sexual partners, unprotected sexual intercourse, and sexual intercourse while under the influence of alcohol or drugs . These behaviours increase the risk of unintended pregnancy and/or Sexually Transmitted Infections (STIs), including Human Immunodeficiency Virus (HIV) infection. In Tanzania more than half of the population is under the age of 20. About 13% of women have had sex by the age of 15 years, and 59% by the age of 18 years . HIV infection among young people is of particular concern: 1.4 million people are living with HIV, and prevalence is 5.1% among 15 to 49-year-olds . Communication about SRH between parents/caretakers and children/adolescents has received a great deal of attention recently. Evidence shows that children who talk with their parents about sexual matters are more likely to postpone sexual activity, have fewer sexual partners and are more likely to use contraceptives and condoms . Most caretakers, however, do not communicate with their adolescents because they find this task daunting and often feel ill-equipped. Moreover, caretakers do not communicate because of the belief that such communication is immoral, contrary to traditional values, and likely to encourage premarital sexual activity . Thus, studies have called for interventions involving parents, and it has been suggested that issues related to sex and parental responsiveness should be addressed more systematically, as this may impact parent-child SRH communication practice .
Although studies on caretaker-adolescent communication on SRH are increasing in Sub Saharan Africa (SSA) , this area has not been well studied in Tanzania. The qualitative studies in Tanzania indicate that some parent-child communication does occur but not in a friendly way. Parents seem to be using fear, threats and physical discipline to ensure their adolescents do not engage in sexual activities. Communication takes the form of warning, and sometimes the language and expressions are ambiguous. Some topics, like the use of condoms are avoided and that communication is mainly about abstinence, HIV/AIDS and unwanted pregnancy . Zanzibar has been recognised for its rich and splendid cultural and religious legacy, far different from Tanzania Mainland, whereby any explicit discussion related to sexuality is normally discouraged. Therefore, in order to improve parent-child SRH communication in Zanzibar, there is a need to implement and evaluate the effectiveness of SRH communication intervention among caretakers of adolescents. In Zanzibar, the only SRH communication intervention implemented is DARAJA curriculum through UJANA project which utilised personnel from UMATI-Zanzibar branch . The evaluation results of this project were not published. Moreover, the availability of training personnel from UMATI who were trained for this curriculum in Zanzibar has prompted us to replicate this intervention, and evaluate its effectiveness. In addition, it is important to note that the contents of DARAJA curriculum include interesting components necessary for behaviour change like SRH knowledge, motivation and skills. Therefore, this time we decided to incorporate the Information-Motivation-Behavioural skills (IMB) model as a framework to guide the implementation and evaluation of this intervention. The use of this model will help us to take into account the specific cultural and religious context of Zanzibar during the intervention implementation. The use of this model in this research also was expected to strengthen its significance in informing SRH communication intervention projects and guide their evaluation. The main objective of this study therefore was to evaluate the effect of SRH communication intervention in improving caretaker-adolescent communication through improving the information, motivation and behavioural skills among caretakers of adolescents in Unguja-Zanzibar. In this study, Berlo’s Sender-Message-Channel-Receiver (SMCR) model of communication was applied to describe the communication process, and Information-Motivation-Behavioural skills (IMB) model was applied to describe the occurrence of communication practice of caretakers. Berlo’s SMCR model of communication focuses on encoding and decoding processes which happen before sender sends the message and before receiver receives the message respectively. It has mainly four components to describe the communication process. They are sender, message, channel and receiver. The model describes several factors affecting the individual components in the communication making the communication more efficient. The factors that are related to sender and receiver are communication skills, attitude, knowledge, social system and culture. According to this model, if the sender and receiver have good communication skills, the message will be communicated better and the receiver can grasp the message. 
The attitude of the sender and the receiver creates the effect of the message, and familiarity with the subject of the message strengthens its effect. Values, beliefs, laws, rules, religion and many other social factors affect the sender’s way of communicating the message, and cultural differences make messages different and thus may hinder effective communication . The Information-Motivation-Behavioural Skills (IMB) model was adopted to inform the SRH communication intervention and guide its evaluation. Theory-based research is necessary to identify the determinants of SRH communication which can be targeted in an intervention. The IMB model was first used in HIV preventive behaviour change to examine risk reduction behaviours in at-risk adolescents . In more recent years, the model has been used to predict more general health-related behaviours. The model has been empirically validated in diverse populations, including adults and adolescents, and across diverse health behaviours . This model has received considerable attention because it not only provides a relatively simple explanation for complex health behaviours but also identifies constructs that are needed for behaviour change . When using this model, it is recommended that one should begin with elicitation research to identify the deficits and strengths in relation to the IMB components. The next step is to develop intervention components to address the deficits, and finally to evaluate the effectiveness of the intervention. According to the IMB model (Fig. ), information (knowledge), motivation and behavioural skills are the fundamental determinants for the initiation and maintenance of health behaviours. It postulates that performing a health behaviour is a function of the extent to which someone is well-informed about the behaviour, motivated to perform the behaviour (e.g., has positive personal beliefs and attitudes towards the behaviour, perceives vulnerability and has social support to practise the behaviour), and has the requisite behavioural skills (objective skills) to execute the behaviour and confidence (perceived efficacy) in their ability to do so across various situations . The IMB model can be extended to evaluate the impact of distal factors that influence an individual’s behavior (Fig. ). Distal factors, such as parents’ influence on adolescents’ behaviors, are important to measure. Thus, one can never view individuals’ behavior without taking into account the environment and relationships. The parent expansion of the IMB model is adapted from the parent expansion of the Theory of Planned Behaviour (TPB) . In this study, we therefore hypothesized a direct effect from the SRH communication intervention to each of the mediating variables (Information, Motivation and Behavioral skills). We also hypothesized a direct effect from each of the mediating variables to the outcome variable, which is SRH communication. We then hypothesized a direct effect from the SRH communication intervention to the outcome, SRH communication. This study therefore set out to test the hypothesis that “The mean score of SRH communication, information, motivation and behavioural skills at the post-test measure of the experimental group is not equal to that of the control group, adjusting for their pre-test mean scores and the measures on which the groups differed”. Setting and design This study was conducted in Unguja-Zanzibar, a part of the United Republic of Tanzania.
The design of this study was quasi-experimental non-randomized controlled pre and post-test research design. The study involved all the 6 districts within the Island whereby two wards (one experimental and one control) in each district were selected. In total, there were six experimental wards and six control wards. The distance between experimental and control wards was considered so as to avoid the likelihoods of intervention diffusion between wards . Although there are minor differences between the experimental and control wards due to their distance apart, the groups have much in common in terms of their tradition and social structure since they are located in the same district. To carry out this research, three specific phases were undertaken as follows: First, the baseline phase aimed to determine the existing level of information, motivation and behaviour skills pertaining to communication on SRH, and this constituted the pre-test scores. The second phase which was carried out at least 1 month after the first phase involved the implementation of SRH communication intervention [DARAJA (THE BRIDGE) curriculum] to experimental groups. The control groups did not receive any of the experimental intervention but were exposed to SRH information only. Thirdly, evaluation was carried out at three time-intervals; one-month after completion of the intervention programme, followed by 6 months after the intervention, and then at 1 year follow up, participants were assessed to determine if the intervention has had sustained effect on their communication practice. All surveys were administered using the same structured questionnaire. Study participants The study population were all male and female caretakers of adolescents aged 15–19 years. Study subjects were eligible to participate in the study after given their consent; these caretakers were either biological parents or parent figures who must have stayed continuously with the adolescents for at least 2 years prior to the survey. Caretakers who were staying with young people who were married were considered ineligible for the study. Moreover, participants do not have to be literate to participate in this intervention study. A three-stage probability sampling technique was used to select the individuals. In all the six districts in Unguja, two out of eight wards in each district were purposively selected, one being experimental and the other being control, making a total of 12 wards. Simple random sampling was then used to select 3 streets (shehia) in each ward making a total of 36 shehias. Systematic random sampling was then used to select 28 (1000 participants/36 shehias) households from a sampling frame consisting of approximately 450 houses in each shehia . After the first household entered, the next 16th (450/28) household with eligible subjects was chosen for interviewing. This process continued until the target sample size was obtained. In each household, if both male and female caretakers were present, a male caretaker was deliberately chosen because of prior experience that male caretakers are difficult to reach because they have their activities mostly outside their home, unlike female caretakers . In houses with multiple households, (for example compound houses) one household was randomly selected for interviewing. The number of participants involved in each time-point of the study is provided in flow chart of follow up (Fig. ). 
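The household-selection arithmetic described above (28 households per shehia drawn from a frame of about 450 houses, giving a sampling interval of roughly 16) can be made concrete with a short sketch. This is not the authors' procedure code; the frame is simulated and the function is only meant to illustrate the interval calculation.

```python
# Illustrative sketch of the household-selection arithmetic described above
# (not the authors' code): about 28 households per shehia from a frame of
# roughly 450 houses gives a sampling interval of 450 // 28 = 16.
import random

def systematic_sample(frame_size: int, n_required: int, seed: int = 1):
    """Return household indices chosen by systematic random sampling."""
    interval = frame_size // n_required              # 450 // 28 == 16
    start = random.Random(seed).randrange(interval)  # random start within the first interval
    return [start + k * interval for k in range(n_required)]

households = systematic_sample(frame_size=450, n_required=28)
print(len(households))  # 28 selected household indices, spaced 16 apart
```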
One thousand participants completed a pre-test assessment at baseline, with 503 participants forming the experimental group and 497 participants forming the control group. Out of 1000 participants, 962 (96.2%) participated in phase two of the study, with 484 (50.3%) participants in the experimental group. At one-year follow-up, data was available from 871/1000 of the baseline cohort. To be included in the analysis of intervention effect, participants had to have attended all the intervention sessions and completed the pre-test and all the post-test measures. In all, 836/1000 were included in the final analysis, with 439/503 (87.2%) participants of the experimental group and 397/497 (79.8%) of the control group. Most of the participants who had dropped out were not present at their homes during the data collection exercise, and two had died. A comparison of people included in the analyses with those excluded because of loss to follow-up revealed no significant differences in terms of demographic characteristics and baseline outcome measures. Intervention procedure This research protocol was approved by the Muhimbili University of Health and Allied Sciences (MUHAS), and by the Ministry of Health and Social Welfare in Zanzibar. Written informed consent was obtained from all interested caretakers. The DARAJA / BRIDGE curriculum intervention employed in this study was developed by the American Red Cross Society and adapted for Tanzania through the collaboration of the Tanzania Red Cross and Family Health International (360) Tanzania. In this study, the contents of the DARAJA curriculum were mapped to the IMB constructs: the Information construct was dealt with in lessons 4 and 6, the Motivation construct in lessons 1, 2 and 3, and Behavioral skills in lessons 5 and 7. This intervention is delivered in three phases (Bridges); Bridge 1 engages caretakers alone, Bridge 2 involves adolescents alone, and Bridge 3 involves adolescents and caretakers together. According to this curriculum structure, the intervention is carried out over two consecutive days: on the first day, Bridge 1 and Bridge 2 were implemented separately, and on the second day, Bridge 3 was implemented. In each bridge, the training sessions lasted for 5 h and 25 min. The intervention delivery methods included lectures, games, group discussions, role-plays and brainstorming. On the other hand, the SRH information-only workshops for the control groups were delivered in a similar way, using lectures and discussion. Initially, elicitation research as recommended by Fisher and Fisher, employing qualitative Focus Group Discussions (FGDs) and a quantitative questionnaire with a sample of the target population, was carried out . While most of the intervention content based on the available DARAJA curriculum was maintained, the interviews and FGDs of the initial phase helped to better understand the important components that needed to be emphasised during the second phase, so as to fit the context of caretakers in Unguja. For example, during FGDs, discussing the use of condoms and other contraceptive methods with adolescents was strongly opposed by participants; this part had to be omitted from the adolescents’ sessions. However, in the caretakers’ sessions the topic of condom use was introduced as one of the safest family planning methods, with the belief that if it was accepted, caretakers would later acknowledge its significance in preventing pregnancy even for adolescents.
Measures A standardized scale has not yet been developed to measure the components of the IMB model for caretaker-adolescent’s SRH communication. Questions were developed from the literature on adolescents and sex education to measure sexual communication. An IMB measure used in another study on behavioral change was used in the present study and guided the development of information, motivation and behavioral skills scale to measure SRH communication practice. Content validity of this scale was assessed through peer review and internal consistency reliability for each measure was calculated through a pilot study on 50 caretakers. After revisions, the final version of the scale was prepared. It took 10 min to complete the questionnaire and it was administered by trained interviewers in Kiswahili language spoken by all participants. Information construct assessed one’s knowledge of SRH by 15 items. Participants were required to mention spontaneously the contents and importance of SRH (e.g. what topics of SRH a caretaker discussed with the adolescent? One point was awarded for a correct-match item mentioned. The maximum score of the information construct was 15 points, (Cronbach’s Alpha coefficient was 0.42 at pre-test and 0.93 at post-test). Motivation assessed caretakers perceived risk (3 items), social norms (3 items) and attitude (5 items) towards SRH communication to adolescents (e.g., Communicating SRH matters with adolescents will promote promiscuity). Items on a Likert scale with options ranging from 1 = strongly disagree to 4 = strongly agree, and the maximum score possible for this construct was 44. Cronbach’s alpha coefficient was 0.68 at pre-test and 0.89 at post-test. Behavioral skills assessed perceived self efficacy (4 items) and perceived objective skills (4 items) of communicating with adolescents. Items on a Likert scale ranging from 1 = very hard to 4 = very easy, and from 1 = very ineffective to 4 = very effective (e.g., I can describe my ability to talk to my adolescent as very ineffective or very effective). Maximum score possible for this construct was 32. Cronbach’s alpha coefficient was 0.83 at pre-test and 0.88 at post-test. Caretakers-adolescents communication on SRH was assessed using two measures; global communication measure and the detailed examination of communication on specific sexual topics (overall measure of communication). Global measures (2 items) assessed if caretakers had ever communicated with their adolescent and if they have done so in the past 30 days. The response options was 1 = never to 4 = a lot. The overall communication was estimated using a weighted measure of family sexual communication scale (7 items) . In the present study, the instrument asks respondents to indicate on a Likert the extent to whether seven specific sexual topics have been discussed to either female or male adolescents, (abstinence, pregnancy, safer sex, HIV/STIs, contraceptives use, abortion, and homosexuality). Example of the question was: How frequently do you talk to your adolescent female/male about pregnancy? Scores are computed by summing all items, with higher scores indicating greater amounts of sexual communication between parents and adolescent. The maximum score is 28 point, Cronbach’s alpha was 0.81 at pre-test and 0.93 at post-test. Data analysis Analysis were performed with SPSS Version 22. Chi-square test was applied to compare demographic characteristics between experimental and control groups. 
Bivariate correlation between IMB variables and communication practice was calculated and Pearson correlation coefficients were reported. To evaluate the effect of the intervention on the post-test measures of the IMB constructs and communication practice, two models of univariate analysis of covariance (ANCOVA) were performed. The first examined the immediate effect of the intervention (after 1 month) on the IMB constructs and the communication practice measure. The second model examined the long-term effect of the intervention (at 6 months and 1 year follow-up) on the communication practice measure. The pre-test scores of the IMB constructs and communication practice measures, and the measures on which the groups differed, were considered as covariates. Adjusted mean scores for the IMB constructs and communication practice are presented. The standardized mean difference (Cohen’s d) was used to calculate the effect size, and the cut-off point for the level of significance was set at a two-sided p-value < 0.05. Our analysis was based on complete-case analysis (list-wise deletion), whereby all cases with missing data were omitted. List-wise deletion is known to produce unbiased estimates and conservative results if the assumption of MCAR (Missing Completely At Random) is satisfied , and this was confirmed by comparing the dataset with missing values to the one containing no missing values using t-tests. The results revealed no significant differences between the two data sets. Moreover, none of the demographic characteristics and baseline outcome measures were significantly different between the follow-up sample and those who were lost to follow-up.
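The analysis strategy described here (ANCOVA on the post-test score with the pre-test score and the covariates on which the groups differed, plus Cohen's d for effect size) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the actual analyses were run in SPSS version 22, the data below are simulated, and all column names are hypothetical.

```python
# Sketch of the analysis strategy described above (illustrative only; the
# real analyses were run in SPSS 22 and all column names are hypothetical).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "group":   rng.integers(0, 2, n),      # 1 = experimental, 0 = control
    "pretest": rng.normal(12, 4, n),
    "age":     rng.normal(46, 11, n),
    "sex":     rng.integers(0, 2, n),
    "educ":    rng.integers(0, 3, n),
})
# Simulated post-test scores with a modest intervention effect
df["posttest"] = df["pretest"] + 2.0 * df["group"] + rng.normal(0, 4, n)

# One-way ANCOVA: post-test score by group, adjusting for the pre-test score
# and the covariates on which the groups differed (age, sex, education).
model = smf.ols("posttest ~ C(group) + pretest + age + C(sex) + C(educ)",
                data=df).fit()
print(anova_lm(model, typ=2))  # F test for the group term

# Cohen's d from the two groups' post-test scores, using the pooled SD
g1 = df.loc[df.group == 1, "posttest"]
g0 = df.loc[df.group == 0, "posttest"]
pooled_sd = np.sqrt((g1.var(ddof=1) * (len(g1) - 1) +
                     g0.var(ddof=1) * (len(g0) - 1)) / (len(g1) + len(g0) - 2))
print("Cohen's d:", (g1.mean() - g0.mean()) / pooled_sd)
```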
The final analysis included 836 participants, with 439/503 (87.2%) of the experimental group and 397/497 (79.8%) of the control group. Demographic data show that the sample of caretakers was predominantly female (667, 79.8%), married (736, 88.4%), of higher educational attainment (341, 40.8%), and with a mean age of 45.7 years (SD = 10.9). The majority of respondents had female adolescents (350, 41.9%) and stayed with their biological adolescents (640, 76.6%), while only 37 (4.5%) stayed with adolescents of other family members. The groups differed significantly on sex, age and education level (Table ). Immediate effect of intervention on SRH information, motivation, behavioral skills and communication practice The estimated pre-test and 1 month post-test mean scores from the univariate ANCOVA analysis for the immediate effects of the intervention on SRH information, motivation and behavioral skills of SRH communication, adjusted for pretest scores, age, sex and education level, appear in Table . Information Participants in the experimental group did not differ significantly from the control group in reported SRH information after the intervention, F(1, 827) = 1.26 ( p = 0.26). Motivation Post-test attitude, F(1,827) = 49.4 ( p < 0.001), and perceived risk, F(1, 827) = 12.5 ( p ≤ 0.001), were statistically significantly greater in the experimental group compared to the control group, with a small effect size (d = 0.3), which indicates a non-overlap of 21.3% in the two distributions. However, social norms did not significantly differ by condition at post-test, F(1, 827) = 0.46 ( p = 0.51). Behavioral skills Post-test perceived skills were statistically significantly greater in the experimental group compared to the control group, F(1, 827) = 10.81 ( p ≤ .001). Although the mean difference is highly statistically significant, the effect size was small (d = 0.2), which indicates a non-overlap of 14.7% in the two distributions, suggesting the mean difference may not be practically important. On the other hand, there was not enough evidence to support the observed difference in perceived efficacy scores between the experimental group and the controls, F(1, 827) = 0.95 ( p = 0.33). SRH communication As shown in Table , post-test SRH communication was statistically significantly greater in the experimental group compared to the control group, F(1,827) = 16.74 ( p ≤ 0.01), with a small effect size (d = 0.3), which indicates a non-overlap of 21.3% in the two distributions. Long term effect of intervention on SRH communication The estimated 6 months and 1 year follow-up post-test mean scores from the univariate ANCOVA analysis for the effects of the intervention on SRH communication practice appear in Table . The analysis was adjusted for age, sex, education level, and the 1 month and 6 months measures of SRH communication (for the 6 months follow-up and 1 year follow-up respectively). Effect of intervention on SRH communication at 6 months follow up 6 months after the completion of the intervention, participants were asked how often they had discussed each SRH topic with their female or male adolescents in the preceding 6 months. The results show that SRH communication was statistically significantly greater in the experimental group compared to the control group, F(1,827) = 17.9 ( p < 0.001), with a small effect size (d = 0.3), which indicates a non-overlap of 21.3% in the two distributions.
Effect of intervention on SRH communication at 1 year follow-up 1 year after the intervention, participants were asked how often they had discussed each SRH topic with their female or male adolescents in the preceding year. The results show that SRH communication was statistically significantly greater in the experimental group compared to the control group, F(1,827) = 40.44 ( p < 0.001), with a small effect size (d = 0.4), which indicates a non-overlap of 27.4% in the two distributions. Correlation between communication practice and information, motivation and behavioural skills at 1 year post intervention Bivariate correlation was run to assess the relationship between communication practice and the constructs of the IMB model. Preliminary analysis showed the relationship to be monotonic, as assessed by visual inspection of a scatter plot. There was a significant positive correlation between communication and information [r = 0.22, p < 0.001], communication and motivation [r = 0.07, p = 0.05], and communication and behavioural skills [r = 0.43, p < 0.001]. Path analysis As hypothesized by the study model, the SRH communication intervention had a significant effect on Motivation and Behavioral skills but not on Information. Moreover, Motivation and Behavioral skills showed a direct significant effect on communication, while Motivation also showed a significant indirect effect through behavioral skills. Finally, the SRH communication intervention showed a significant effect on communication practice (Fig. ).
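The "non-overlap" percentages attached to each effect size in the results above (14.7% for d = 0.2, 21.3% for d = 0.3, 27.4% for d = 0.4) are consistent with Cohen's U1 statistic under the usual assumption of two normal distributions with equal variance. The short check below assumes that interpretation; it reproduces the reported figures to within rounding.

```python
# Quick check of the "percent non-overlap" figures quoted with each effect
# size: they match Cohen's U1 for two normal distributions with equal
# variance, U1 = (2 * Phi(d/2) - 1) / Phi(d/2).
from scipy.stats import norm

def cohens_u1(d: float) -> float:
    p = norm.cdf(abs(d) / 2.0)
    return (2.0 * p - 1.0) / p

for d in (0.2, 0.3, 0.4):
    print(f"d = {d}: non-overlap = {cohens_u1(d) * 100:.1f}%")
# d = 0.2: non-overlap = 14.8%  (reported as 14.7%, Cohen's rounded table value)
# d = 0.3: non-overlap = 21.3%
# d = 0.4: non-overlap = 27.4%
```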
Correlation between communication practice and information, motivation and behavioural skills at 1 year post-intervention
A bivariate correlation was run to assess the relationship between communication practice and the constructs of the IMB model. Preliminary analysis showed the relationships to be monotonic, as assessed by visual inspection of scatter plots. There were significant positive correlations between communication and information (r = 0.22, p < 0.001), communication and motivation (r = 0.07, p = 0.05), and communication and behavioural skills (r = 0.43, p < 0.001).

Path analysis
As hypothesized by the study model, the SRH communication intervention had a significant effect on motivation and behavioural skills but not on information. Motivation and behavioural skills each showed a direct significant effect on communication, while motivation also showed a significant indirect effect through behavioural skills. Finally, the SRH communication intervention showed a significant effect on communication practice (Fig. ).
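As an illustration of the bivariate correlations above, the sketch below uses Spearman's rank correlation, a reasonable choice given that the relationships were judged monotonic from scatter plots (Pearson's r would be the alternative). The data frame and column names are hypothetical, and this is not the authors' analysis script.

```python
# Correlate each IMB construct with the SRH communication score.
import pandas as pd
from scipy.stats import spearmanr

def imb_correlations(df: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for construct in ("information", "motivation", "behavioural_skills"):
        rho, p = spearmanr(df[construct], df["communication"])
        rows.append({"construct": construct, "r": round(rho, 2), "p": round(p, 3)})
    return pd.DataFrame(rows)

# Example usage: print(imb_correlations(df))
```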
This study assessed the effect of a conceptually based intervention aimed at enhancing parent-child communication on SRH matters by improving information, motivation and behavioural skills among caretakers of adolescents. The study also aimed to evaluate the relationship of the IMB model constructs with communication practice. At baseline, sex, age and education level were significantly related to SRH communication in both groups: female caretakers, those aged 50–59 years, and those with higher educational attainment were more likely to have communicated with their adolescents. Same-sex communication was also more common before the intervention, which could be due to feelings of shame and embarrassment when discussing SRH matters with the opposite sex. Interestingly, after the intervention there was no significant difference in communication among caretakers in the experimental group across these variables. The findings confirmed that caretakers exposed to the intervention demonstrated significant improvements in motivation, behavioural skills and communication at the immediate post-test, and significant sustained effects on communication practice at the 6-month and 1-year follow-ups, compared with the control group. Although the low non-overlap between the experimental and control distributions may indicate limited practical significance, the reported increase in SRH communication with adolescents over time suggests that the intervention may have helped overcome traditional and cultural barriers that restrict parent-child communication about SRH. Traditionally, social structures were responsible for preparing adolescents for adulthood, usually a member of the extended family of the same sex (e.g. an aunt (Somo) for girls and an uncle for boys). They usually emphasized hygiene for girls, and chastity was cherished more for girls than for boys; direct conversation between parents and children about sex and sexual issues was taboo. These traditions have largely disintegrated with urbanization, modernization and the introduction of new sources of communication such as the internet and Western television programmes, so that traditional practices have come to be perceived as outdated. The findings therefore suggest that the intervention may have succeeded in influencing caretakers to take on the non-traditional role of SRH educator and gave them reasons to move away from these restrictive taboos.

Despite the significant effect of the intervention on motivation, behavioural skills and communication practice, there was insufficient evidence to support a difference in information mean scores between the experimental and control groups at post-test. This is not surprising, given that most of the SRH information provided to the experimental group was closely related to that given to the control group. Similar findings were reported by Cornman et al. (2007), who used an IMB model-based intervention to promote condom use. This study also showed that information, motivation and behavioural skills have significant positive correlations with communication practice. The literature suggests that accurate SRH information, high levels of SRH communication motivation and strong SRH communication behavioural skills are associated with higher levels of SRH communication behaviour. These variables should therefore be considered key factors in designing SRH parent-child communication interventions. Despite the moderate associations observed, identifying the determinants of each IMB component may help to shape more effective interventions for promoting SRH communication among caretakers of adolescents. Because the behavioural skills construct showed the strongest correlation with communication practice, and because skills are best imparted through direct contact in which participants can practise the behaviour and receive individualized feedback, delivering this intervention message via mass media is not advised: although it would reach more people at lower cost, it is unlikely to be as effective. Community-based SRH communication programmes remain scarce but have shown promising results. Some intervention programmes implemented in SSA have targeted parents and other adults with the aim of increasing SRH communication with adolescents. Such studies used similar elements of parental responsiveness, including knowledge, motivation, comfort, skill and confidence, as used in this study. Consistent with those interventions, a series of facilitated sessions was carried out and participants were followed over time to determine the sustained effect of the intervention on SRH communication practice, because behaviour change develops over time following several training sessions. There was also congruence between this study and the existing literature with regard to the methods of intervention delivery, which included role play, discussion, games and lectures. As in studies done in SSA, these methods are believed to help participants develop communication skills and realise the importance of confronting cultural norms and taboos in order to protect their children from reproductive health problems. The current study findings have important theoretical and practical implications for refining parent-child SRH communication interventions; however, some limitations should be taken into account when interpreting the findings. One limitation is that the intervention was held over only 2 days, which was not adequate for participants to process and practise the skills learned or to fully change unhelpful cultural and religious norms that had been practised for many years.
To mitigate this, effort was made to ensure that each participant had an opportunity to practise the skills more than once during the intervention. A weekly intervention of six three-hour sessions is recommended, as was done in the Family Matter Project-Tanzania, in which participants had more opportunity to practise the skills learned at home and then share the outcomes with their peers on their return. Secondly, caretakers were asked retrospectively about their communication practice with their adolescents. Recall bias and social desirability could create response biases and may have led to over- or under-reporting of communication practice. Future research should consider interviewing parent-child dyads so that the validity of the parent's responses can be checked against the child's responses. Thirdly, more women than men participated in the intervention. This may have led to an overestimation of communication prevalence among the participants, so caution should be taken when interpreting the results. Future studies should further explore male perceptions of SRH communication with adolescents, since male involvement is as important as female involvement in influencing adolescents' risky sexual behaviours. The findings provide preliminary evidence for the effectiveness of the SRH communication intervention and support the value of the IMB model constructs in informing the intervention and guiding its evaluation. The study found that caretakers can adopt the role of SRH educators if they are provided with the necessary support, even in areas where cultural norms discourage such communication. An IMB-based SRH communication intervention should be considered a promising means of empowering the community so that parents can teach their adolescent children about SRH. The practical implication of this study for future interventions is that community and family interactions shape individual behaviour and must be understood in order to better meet population needs. One should therefore look beyond the individual: both community- and family-level factors are salient in shaping caretakers' cognitions about sexual health communication with their children. Furthermore, providing more time, follow-up sessions and multisession interventions is more likely to change social norms that have been practised for many years and thus to improve SRH communication considerably. The theoretical implication of this study is that the development of parent-child communication on SRH issues is well captured by the IMB model, which suggests that communication practice is related to the extent to which a person is informed, motivated and equipped with the behavioural skills to communicate. As suggested by the results, the study concludes that if a caretaker perceives the risk and has supportive social norms, a positive attitude, self-efficacy and communication skills, these together give him or her reason to communicate with an adolescent about SRH issues. The success of the DARAJA curriculum in Unguja holds promise for other intervention programmes in a range of settings with differing initial perceptions about discussing SRH with preadolescents.
Evidence for Endogenous Collagen in
1531a1bf-2fad-4cc1-8b03-f75eff3c0954
11822843
Biochemistry[mh]
Bone stability and the temporal decay of organic molecules is of interest in palaeontology, , archeology, and forensics. The state of decay can provide information regarding burial conditions, e.g. aerobic/anaerobic etc. and disease status. The bones of all vertebrate animals contain proteins including collagen which decay as a consequence of bio- and environmentally induced degradation post mortem . − In large animals, due to the bone size and initially high protein abundance, with modern techniques it is possible to identify and quantify protein remnants in ancient samples. A review of soft tissue preservation in palaeontological samples from different strata and locations reveals widespread occurrence (see Thomas and Taylor and references therein ). Using scanning electron microscopy (SEM), Pawlicki et al. in 1966 reported collagenous material in the phalange bone of a dinosaur from the Upper Cretaceous. In 1999 collagen fibers were reported in T. rex bone (Museum of the Rockies MOR 555) from the Hell Creek Formation using transmission electron microscopy (TEM). Attempts to identify residual hemoglobin and heme were inconclusive and this remains an active research area. The examination of another T. rex bone (MOR 1125) from the same formation using SEM revealed tissue flexibility which was unanticipated. Secondary ion mass spectrometry (SIMS) was later used and protein endogeneity was proposed. In 2008, multiple layers of collagenous fibers were reported in Psittacosaurus skin from the Lower Cretaceous Xixian Formation. Sauropodomorph embryos from the Lower Jurassic were assessed using synchrotron radiation Fourier transform infrared spectroscopy (SR-FTIR) which indicated the presence of amide and apatite peaks within woven embryonic bone tissue. Another study used FTIR, Raman and second harmonic generation (SHG) to confirm collagen in samples of modern, medieval, and ice-age bones. Histochemical and immunological evidence was concluded to support collagen type II presence in Hypacrosaurus stebingeri , from a duck-billed dinosaur (MOR 548) from the Upper Cretaceous. The authors argue that microbial contamination could be eliminated as the protein source, since microbes are incapable of producing collagen. Intercalating DNA staining was observed and the survival of endogenous nuclear material was suggested. Studies using Mass Spectrometry (MS) include Asara et al. They sequenced collagen fragments from a mastodon (MOR 605) and T.Rex (MOR 1125) using liquid chromatography tandem mass spectrometry (LC-MS/MS), concluding long-term stability of peptide bonds. This was followed in 2009 by time-of-flight (ToF)-secondary ion mass spectrometry (SIMS) study of Brachylophosaurus canadensis (MOR 2598) fossils. Hydroxyproline (Hyp, C 5 H 9 NO 3 ) was identified, a relatively rare amino acid but abundant in collagen. Further study on the same bone confirmed earlier findings and a further six collagen I peptides were sequenced. In 2009, a study of Edmontosaurus ( sp. ) using FTIR suggested the presence of amide-containing compounds (absorption peaks around 1650 cm –1 ) and pyrolysis gas chromatography (GC)-MS confirmed endogenous organics. Lee et al. published their evidence of preserved collagen I in a Jurassic sauropod Lufengosaurus using SR-FTIR. Using the same technique, Boatman et al. also showed strong amide I and amide II absorption bands in T. rex vessels, consistent with collagen presence. Scanning electron microscope (SEM) imaging showed a triple helix (consistent with fibrillar collagen). 
The above authors report preservation of original collagen over long time periods, detected by an array of techniques. However, the endogeneity of protein remnants in paleontological bones has been contested, with some maintaining that all original (endogenous) proteins should long ago have been replaced by the process of mineralization and can no longer be found in situ. In this paper we use attenuated total reflectance (ATR) FTIR and cross-polarized light microscopy (XPol), supplemented by two MS techniques, to elucidate the question of collagen endogeneity in Edmontosaurus sp. fossil bone (UOL GEO.1). LC-MS/MS is used to identify hydroxyproline, and enzymatic digestion followed by MS yields partial amino acid sequences that are used in database searching to identify specific proteins.

Samples and Preparation
Herbivorous Edmontosaurus sp. (Hadrosauridae) sacrum bone fossils were excavated from the Upper Cretaceous zone of the Hell Creek Formation in Harding County, South Dakota, USA (45.56° N, 103.46° W) in 2019. A 20 kg sample from this duck-billed dinosaur fossil, together with samples of the accompanying sediment, was donated to and accessioned at the repository of the Victoria Gallery & Museum of the University of Liverpool under UOL GEO.1. Motion photogrammetry was used to capture a digital 3D model of the Edmontosaurus sp. bone fossils prior to analysis (see Supplementary Table S1). For comparison and control, a modern bone from a common turkey (Meleagris gallopavo), sourced from a local butcher and chosen because turkeys are often classed in the Archosauria, and pure bovine tendon collagen (Sigma-Aldrich product #5162) were used. Small bone segments (on the order of a few grams) were dried in an oven at 60 °C for several hours in preparation for crushing (powderisation). The same preparation and analysis protocols were used for both bone samples. The samples analyzed were ground bone shards (cross sections of 1–3 mm thick, ) prepared using a mortar and pestle. The shards were cleaned with powdered bicarbonate and hot water (∼50 °C) before a final rinse with deionized water, and were then ground to a powder with particle sizes of no more than 50 μm [40]. A 50-μm stackable zooplankton sieve was used to filter the particles onto a freshly cut piece of aluminum foil for transfer into new vials, ready for LC-MS/MS analysis.

FTIR
FTIR was performed using an attenuated total reflectance (ATR) accessory with a germanium window on a Bruker Vertex 70 equipped with a deuterated L-alanine-doped triglycine sulfate (DLaTGS) detector. Each spectrum was the average of 32 scans, collected at a resolution of 2–4 cm−1 over the range 4500 to 650 cm−1. Spectra were collected and analyzed with OPUS software and compared with an authentic calcium phosphate spectrum from the library (©Nicodom, 2014). Absorption maxima correspond to the abundance of the moiety in the sample absorbing energy at a given frequency.

XPol
Thin sections of UOL GEO.1 were prepared according to Chinsamy and Raath. Accordingly, polyvinyl acetate was used as the binding agent and applied to the bone-glass contact surface only. Thin sections were polished to a thickness of 16 μm and imaged using a Motic Polarizing Microscope BA310 with a Sony ILCE-7RM4 detector. Images from several focal planes were collected and then stacked using Photoshop 24.5.

LC-MS/MS Bottom-Up Proteomics
Twenty milligrams each of Edmontosaurus bone, turkey bone, and bovine collagen was dispensed into separate polypropylene microcentrifuge tubes.
Each sample was treated with aqueous ammonium bicarbonate (AmBic, 80 μL, 25 mM) and RapiGest SF Surfactant solution (1% RapiGest solution in AmBic, 5 μL, Waters) with continuous gentle shaking (450 rpm, 80 °C, 10 min.). Cysteine reduction was then performed by the addition of dithiothreitol (DTT, 11.1 mg/mL in 25 mM AmBic, 5 μL). After mixing and incubation (60 °C, 10 min.) alkylation of free thiols was performed using iodoacetamide (46.6 mg/mL in 25 mM Ambic, 5 μL, 30 min in the dark). Excess iodoacetamide was quenched with DTT (4.7 μL as above), and samples were acidified (neat trifluoroacetic acid, 2 μL) to a pH of 2 or less (checked with pH indicator paper). Digestion was carried out with trypsin (Promega sequencing grade, 0.2 μg/μL in 50 mM aqueous acetic acid) with incubation (37 °C, 16 h). Following centrifugation (13,000 g , 15 min, 4 °C) the supernatants were transferred to clean microcentrifuge tubes and stored frozen until analysis. Samples were analyzed using nanobore reversed-phase chromatography (Ultimate 3000 RSLC, Thermo Scientific, Hemel Hempstead) coupled to a hybrid linear quadrupole/orbitrap mass spectrometer (Q Exactive HF Quadrupole-Orbitrap, Thermo Scientific) equipped with a nanospray ionization source. Samples (2 μL) were loaded onto the trapping column (Thermo Scientific, PepMap100, C18, 300 μm x 5 mm) equilibrated in aqueous formic acid (0.1%, v/v) using partial loop injection over 7 min at a flow rate of 12 μL/min. After the direction of eluent flow was reversed components were transferred to and resolved on an analytical column (Easy-Spray C18 75 μm x 500 mm, 2 μm particle size) equilibrated in 96.2% eluent A (water/formic acid, 100/0.1, v/v) and 3.8% eluent B (acetonitrile/water/formic acid, 79.95/19.95/0.1, v/v/v) and eluted (0.3 μL/min) with a linear increasing concentration of eluant B (min/% B; 0/3.8, 30/50). The mass spectrometer was operated in a data-dependent positive ion mode (fwhm 60,000 orbitrap full-scan, automatic gain control (AGC) set to 3e 6 ions, maximum fill time (MFT) of 100 ms). The seven most abundant peaks per full scan were selected for high energy collisional dissociation (HCD, 30,000 fwhm resolution, AGC 1e 5 , MFT 300 ms) with an ion selection window of 2 m / z and a normalized collision energy of 30%. Ion selection excluded singularly charged ions and ions with ≥ +6 charge state. A 60 s dynamic exclusion window was used to avoid repeated selection of the same ion for fragmentation. Survey analyses of each sample were first used to determine the sample amount, calculated by extrapolation, needed to give a full orbitrap scan base peak intensity (BPI) of 1–2 x10 9 . These analyses were performed with a compacted 15 min gradient ( Supplementary Table S2 ). Based on these BPI results, 2 μL of neat Edmontosaurus sample was used for the full analysis. The modern turkey sample was diluted 1:100 and Bovine collagen sample was diluted 1:1000 in water/acetonitrile/trifluoroacetic acid (97/3/0.1, v/v/v). Typically, one or two blanks would be run once finishing test runs. Here, four 30 min blank analyses (injection solvent only) were performed after the turkey and bovine samples to minimize carry over, then the fossilized sample was analyzed on the 1 h program. 
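For readers unfamiliar with the gradient notation used above, e.g. "(min/% B; 0/3.8, 30/50)" meaning that %B rises from 3.8% at 0 min to 50% at 30 min, the following small helper interpolates %B at any time point, assuming simple linear ramps between the listed points. It is purely illustrative and is not instrument-control code.

```python
# Interpolate %B at time t_min from a (time, %B) gradient program.
def percent_b(t_min: float, program=((0, 3.8), (30, 50))) -> float:
    for (t0, b0), (t1, b1) in zip(program, program[1:]):
        if t0 <= t_min <= t1:
            return b0 + (b1 - b0) * (t_min - t0) / (t1 - t0)
    raise ValueError("time outside gradient program")

print(percent_b(15))  # 26.9 %B halfway through the 30 min ramp
```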
The blank (water/acetonitrile/formic acid, 97/3/0.1, v/v/v) was resolved on the analytical column (Easy-Spray C18 75 μm x 500 mm 2 μm particle size) equilibrated in 96.2% eluent A (water/formic acid, 100/0.1, v/v) and 3.8% eluent B (acetonitrile/water/formic acid, 79.95/19.95/0.1, v/v/v) and eluted (0.3 μL/min.) with a linear increasing concentration of eluant B (min/% B; 0/3.8, 15/50). Database Searches The data files were imported into PEAKS 11 (Bioinformatics Solutions Inc.) for searching the reviewed SwissProt database (569516 sequences), as well as the mixed—reviewed/unreviewed, one gene-one protein—UniCow (23841 sequences), UniTurkey (16212 sequences) and UniChick (18369 sequences) databases (all downloaded 05–04–23). The search parameters included cysteine carbamidomethylation, methionine oxidation, variable lysine and proline oxidation, a precursor mass tolerance of 10 ppm, a product mass tolerance of 0.01 Da, and a maximum of one missed cleavage. This software permits database searching for multiple post translation modifications (PTMs). The Edmontosaurus sample was searched against the SwissProt database, bovine collagen (96%) was searched against UniCow, and the modern turkey sample was searched against both UniChick and UniTurkey databases. The contaminants (cRAP) database was also included in each search. LC-MS/MS of Hydroxyproline Bone samples from the Edmontosaurus and modern turkey, and pure bovine collagen samples, were simultaneously processed and analyzed. After being frozen with liquid nitrogen, both fossilized and turkey bone samples were manually crushed to a fine powder with a mortar and pestle. One-gram portions of the powdered bone and 5 mg of bovine pure collagen were dispensed into polypropylene microcentrifuge tubes, suspended in water (1 mL), mixed vigorously, sonicated in a bath sonicator (30 min.), centrifuged (2000g, 15 min.), and the supernatants transferred to new tubes. The extraction procedure was repeated on the pellet by adding methanol (1 mL) and the samples were mixed, sonicated, and centrifuged as above. The supernatants were pooled and reserved for future bottom-up proteomics. The pellets were then treated with HCl (2 mL, 6 N) and incubated (2 h, 60 °C) before the samples were dried in a vacuum centrifuge. The HCl treatment was repeated until the samples ceased effervescing after HCl addition, each time with drying in a vacuum centrifuge between acid treatments. The repeated HCl treatments are necessary to remove all carbonate from the samples prior to attempting protein hydrolysis. Residual carbonate would completely or partially neutralize the acid necessary for amide bond cleavage. Removal of all carbonate was judged to be complete when there was no effervescence of the samples after the addition of acid, and was checked with pH paper indicator to ensure the samples were strongly acidic before proceeding with the protein hydrolysis treatment. Generally, it took two or three such treatments before effervescing ceased. The final dried samples were treated again with HCl (500 μL, 6 N) and incubated (12 h, 120 °C) to effect protein hydrolysis. The samples were dried overnight in a vacuum centrifuge and then treated with n-butanolic HCl (300 μL, 3 N), incubated (2 h, 60 °C) to make the butyl esters, and dried again in a vacuum centrifuge. Lastly, the samples were reconstituted in water (200 μL), mixed vigorously, and centrifuged (5 min, 16,000g, room temperature). 
The supernatants were transferred to HPLC vials and aliquots (typically 10 μL) injected onto a reversed-phase HPLC column (Phenomenex Kinetex, 2.6 μm Polar C18, 100 Å, 100 × 2.1 mm), equilibrated in eluant A, and eluted (100 μL/min) with a stepwise linearly increasing concentration of eluant C (acetonitrile/formic acid, 100/0.1, v/v; min/%C, 0/1, 5/1, 20/25, 22/1, 60/1). The effluent from the column was passed through an electrospray ionization (ESI) source (spray voltage 4.5 kV) connected to a hybrid linear ion trap/orbitrap mass spectrometer (Thermo Scientific Orbitrap LTQ XL) scanning in positive ion mode. For the collection of ion trap mass spectra, the following instrument parameters were used: sheath gas flow rate 30 (arbitrary units), auxiliary gas flow rate 5 (arbitrary units), capillary temperature 300 °C, spray voltage 4,500 V, capillary voltage 22 V, tube lens voltage 110 V. For the collection of orbitrap mass spectra (accurate m/z measurements), acquired immediately after calibration with LTQ ESI Positive Ion calibration solution mix, the same ESI settings were used with the following mass spectrometer parameters: mass range normal, fwhm resolution 100,000, scan range 50–1000 m/z. For the collection of ion trap fragment ion spectra of the butyl ester of hydroxyproline (Hyp be, MH+ at m/z 188), the following instrument parameters were used: mass range normal, scan range 50–200 m/z, collision energy 35, activation time 30 ms. Data were collected and interrogated with instrument manufacturer-supplied software (Xcalibur 2.05). Control samples were included with each batch of bone samples. These were negative control samples devoid of added bone extracts (in triplicate), bovine collagen (20 mg/sample, in triplicate), and authentic Hyp standards in a range of amounts (typically 0, 2, 10, 20, and 50 nmol/sample, in duplicate). These control samples were prepared and processed alongside each batch of bone samples. The order in which samples were analyzed was carefully arranged. Injections of water (solvent blanks) were made at the start of the analysis of each batch to check that there were no Hyp be peaks resulting from carry-over from previous sample batches. After verification of LC/MS system cleanliness, a typical order of analysis was: negative control samples 1–3; water blank #1; Edmontosaurus fossilized bone samples 1–3; water blanks #2 and #3; turkey bone samples 1–3; water blanks #4 and #5; collagen samples 1–3; water blanks #6 and #7; Hyp be standards 1–10; water blanks #8–#10. The data from the standard Hyp be samples were processed by plotting the known amount of Hyp per sample against the measured chromatographic peak areas corresponding to the Hyp be peak. The trendline equation was then used to interpolate or extrapolate the amount of Hyp in each sample.
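The standard-curve step just described can be illustrated with a short sketch: fit a straight line to the known Hyp amounts versus their measured peak areas, then invert the trendline to estimate the Hyp content of an unknown sample. The peak-area values below are placeholders, not the measured calibration data.

```python
# Linear calibration curve for Hyp quantitation (illustrative numbers only).
import numpy as np

std_nmol = np.array([0, 2, 10, 20, 50], dtype=float)      # nmol Hyp per standard
std_area = np.array([0.0, 1.1e5, 5.3e5, 1.05e6, 2.6e6])   # hypothetical peak areas

slope, intercept = np.polyfit(std_nmol, std_area, 1)       # trendline: area = slope*nmol + intercept

def nmol_from_area(area: float) -> float:
    """Interpolate (or extrapolate) nmol Hyp from a measured peak area."""
    return (area - intercept) / slope

print(f"{nmol_from_area(4.0e5):.1f} nmol Hyp")  # estimate for a hypothetical unknown peak area
```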
After initial cleaning, the fossilized Edmontosaurus sp. sacrum (UOL GEO.1) bone material weighed in total approximately 20 kg, and the main fragment was intact, needing little stabilization. The 3D model (Supplementary Table S1) allows detailed inspection of the surface topography of the bone, aiding identification of postdepositional breaks and geometric measurements. Photogrammetric analysis showed that UOL GEO.1 had residual integrity. Trabecular bone is visible to the eye and also by digital microscopy.

FTIR
The inorganic component of the assemblage of bone is composed of hydroxyapatite (bioapatite). Antisymmetric stretching of PO4 occurs between approximate wavenumbers 1000–1100 cm−1, depending on the dipole change of the moiety. The FTIR spectra recorded for Edmontosaurus fossilized bone, turkey bone, and inorganic calcium phosphate all show a strong absorption around wavenumber 1050 cm−1. The organic component of fresh bone comprises mostly type I collagen. An FTIR spectrum of collagen will show an amide I band (containing the carbonyl, C=O) around 1650 cm−1. The band visible around 1652 cm−1 in the modern turkey bone likely indicates the presence of collagenous protein. As expected, this absorption maximum is not evident in the spectrum obtained from calcium phosphate. However, neither the amide I nor the amide II band is present in the Edmontosaurus sample, although a small carbonyl absorption band is just visible ( inset). The intensity ratio of carbonyl to phosphate (denoted "CO/P") is used as a proxy for collagen abundance, but it does not guarantee that the carbonyl moiety is from collagen. CO/P was 0.455 for the turkey sample and 0.065 for UOL GEO.1.

XPol
Cross-polarized light ("crossed-polar") microscopy has been used to image stained skin collagen for quantification of collagen density and to image unstained bone collagen. The architecture of bone lamellae can be observed under polarized light microscopy since bone is optically anisotropic (birefringent). It is the composite of collagen fibers and bioapatite crystallites in regular patterns that gives bone its birefringence. XPol images of UOL GEO.1 bone tissue revealed differing characteristics of color within two distinct microscopic regions.
Hard-edged, angular green shapes are interpreted as calcite inclusions within osteonic lumens. However, a minority of regions that were once fresh bone tissue also show birefringence. Unlike the calcite inclusions that can occur in lumens, these regions occur within the bone matrix. They contain small, dark lacunae that once held osteons. Birefringence within formerly fresh bone appears reddish-gold under crossed polars ( B). With a first order red filter, XPol revealed gold-colored regions ( C inset and D) that turned blue-green when rotated over 90°. This birefringence characterizes the collagen-bioapatite crystallinity that pervades fresh bone. This appears to occur only in patches within the fossil. Two options present themselves to help interpret the observed birefringence. In one, collagen has decayed from all of the Edmontosaurus bone matrix. Since bioapatite crystallites rapidly disperse upon collagen degradation, some other cementing agent would have replaced the role of collagen in holding those crystallites in their original positions and pattern. This scenario would require the exogenous cementing agent, to replace the collagen only within the still-birefringent regions. These minerals would have permineralized pore spaces such as the osteonic lumens before penetrating only a minority of bone matrix. Three deficiencies with this diagenetic scenario emerge. First, the requirement of a cementing agent to move in place of collagen while maintaining the spatial positioning of bioapatite crystallites is unlikely, with randomization or indeed loss of crystallites a more likely outcome. Second, the water required to transport dissolved ions that precipitate into minerals would have facilitated degradative chemistry alongside physical dispersion and transport of original bone collagen and/or bioapatite. Lastly and equally unlikely, the replacement cementing mineral would exactly replicate original crystallite positioning so as to retain bone microstructures including lamellae (seen in other samples) and lacunae as seen in . A second option involves retention of sufficient original collagenous remnants to preserve crystallites in life position. The many published descriptions of biomolecular remnants in fossils strongly suggest that original protein may persist in Cretaceous bone. Indeed, at least ten reports describe remnant osteocytes liberated by dissolution from fossil bone, , − showing some preservation of original organics. A more parsimonious explanation for the regions within the extracellular matrix (ECM) that retain a degree of birefringence is that they retain remnants of original collagen sufficient to hold some crystallites in their original patterns. Accordingly, the regions within bone that have no birefringence would represent zones where collagen has completely decayed and thus where crystallites have dispersed. Turkey bone was artificially decayed at high temperature. Similar birefringence to fossil bone was observed under XPol with a first order red filter . No permineralization was observed in osteons, as expected, since our bone decay procedure did not include dissolved ions. In this case, almost-black areas remain dark after rotating the sample at an approximate right angle (105°), making them no longer birefringent. Bioapatite would have dispersed from these areas as high temperatures accelerated the collagen decay. However, microregions traded gold for blue and vice versa upon rotation, in a similar way to the fossil. 
These results are also consistent with the retention of collagen remnants in birefringent microregions in both the artificially decayed turkey bone and the naturally decayed Edmontosaurus bone.

LC-MS/MS Bottom-Up Proteomics
Survey analyses of 15 min were used to determine the amount of each sample required to obtain comparable signal intensities. The base peak intensity (BPI) chromatograms (30 min) of the Edmontosaurus and turkey bone samples are shown in , along with those of authentic bovine collagen. The samples were digested with trypsin, and the resulting peptides were ionized with a nanospray ionization source (nESI). Ions with charge states between +2 and +5 were selected to enter the mass spectrometer, which they do at different times depending on their retention affinity for the chromatography column, before being mass analyzed. The resulting ion masses are then compared with those in existing databases, as described in the database searching methods section. There is an overall similarity between the chromatograms for the turkey and the Edmontosaurus samples and a close match between the retention times for the highest peaks (20.321 min for turkey and 20.685 min for Edmontosaurus) and the peaks immediately preceding and following them. The difference in retention times is possibly due to the differing bone matrices having different binding affinities for the column. Analysis of the data sets revealed six collagen-derived peptides in the Edmontosaurus sample, with mass errors ranging from 1.3 to 3.6 ppm, each oxidized at a minimum of one position (always on proline, i.e., hydroxyproline). The ppm error is calculated as the difference between the measured mass and the theoretical (database) mass, divided by the theoretical mass. All of these sequences correspond to peptides reported in the SwissProt database for Brachylophosaurus canadensis, another duck-billed dinosaur that, together with Edmontosaurus, is classed in the family Hadrosauridae. Five of the sequences are from the collagen alpha-1(I) chain (length: 113 amino acids, mass: 9664 Da), together covering 73.45% of the entire sequence (positions 19–33 and 64–78 absent). The remaining detected peptide (m/z 805.38074) accounts for 50% coverage of the collagen alpha-2(I) chain (length: 36 amino acids, mass: 3122 Da), with positions 19–36 unaccounted for. Three sequences (rows 1, 4, and 5 of ) have also been reported for a T. rex sample from the Hell Creek Formation. Most of the PTMs on these Edmontosaurus sequences are oxidations, but we also note deamidation of N (asparagine) and Q (glutamine) residues, in 85/150 and 55/95 peptides, respectively. Deamidation in our turkey sample is significantly lower (32/440 peptides). Supplementary Table S3 lists a total of at least 41 collagen sequences discovered in UOL GEO.1. It also lists PTMs for the first eight accessions and refers to the raw data (login details are on page 1 of the Supporting Information; see also the Figure S4 caption). These matched a mixture of extant and extinct fauna found in the SwissProt database. The most abundant of these matched peptides were from domestic chicken (Gallus gallus), followed by American mastodon (Mammut americanum). In addition, peptides from other proteins were detected in the Edmontosaurus sample. Over a hundred actin peptides were found, and 61 hemoglobin, 158 histone, and 92 tubulin peptides were also detected.
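For reference, the ppm error calculation described above can be written as a one-line function; the example values are the accurate-mass measurement of the hydroxyproline butyl ester parent ion reported later in the text (pairing these particular numbers with the function is our illustration, not the authors' code).

```python
# Parts-per-million mass error: (measured - theoretical) / theoretical * 1e6.
def ppm_error(measured_mz: float, theoretical_mz: float) -> float:
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

# Hyp butyl ester MH+ measured at m/z 188.12802 vs the calculated 188.12812:
print(round(ppm_error(188.12802, 188.12812), 1))  # -0.5 ppm
```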
The Basic Local Alignment Search Tool (BLAST) analysis ( Supplementary Table S7 ) revealed that the collagen sequence does have genera-specific differences based on comparison with extant taxa and recurring sequence similarities based on functional constraints. There is also the question of variabilities caused through diagenesis, which may blur genera-specific residues. The similarity of the UOL GEO.1 sequence to both extant and extinct taxa are not therefore unexpected. Several sequences, including those from chicken ( Gallus gallus ) and dog ( Canis lupus familiaris ), returned sequence identity scores at least as high as hadrosaur ( Brachylophosaurus canadensis ). The question arises as to whether such results indicate contamination. Modern humans of course associate with chicken and dog, but not likely with rat ( Rattus norvegicus ) and not at all with mastodon ( Mammut americanum ), both of which also scored higher than B. canadensis . Since it is not possible for mastodon sequence to have entered the analysis stream via modern contamination, then alternate explanations for it and accompanying taxa should be sought. If previously tested samples had included our reported taxa, such a finding would provide evidence for contamination through holdover within instrumentation. However, fruit fly ( Drosophila melanogaster ) samples preceded our dinosaur, samples and no Drosophila peptide sequence matches occurred in our data set. However, different database algorithms can assign the same peptide sequences to different taxa. Furthermore, peptide sequences can be remarkably similar among widely different taxa, , even up to the phylum level. Thus, it is not unreasonable to expect and find almost identical sequences among differing taxa. The turkey bone data checked against the UniChick database ( Supplementary Table S4 ) gave 55.33% coverage of collagen alpha-1(I) chain and 74.69% coverage of the alpha-2(I) chain. The UniTurkey database contains no matches for type I collagen ( Supplementary Table S5 ). No Brachylophosaurus peptides were found in the turkey bone data set (compared against UniChick and UniTurkey databases). A majority (185) of the 188 peptides found in bovine tendon (96% pure) collagen matched that of Bos taurus , accounting for 98.40% of the amino acid sequence ( Supplementary Table S6 ). LC-MS/MS of Hydroxyproline Under ESI conditions, Hyp be (butyl ester of hydroxyproline) produces an intense parent ion at m / z 188 (MH + ), which upon fragmentation under the specified MS/MS conditions, yields fragment ions at m / z 132 and 86. The former likely results from loss of the butyl side chain (C 4 H 8 ) with a proton transfer to the CO group: thus, (C 5 H 9 NO 3 + H) + would be the corresponding elemental composition. Further neutral loss of 46 u (H 2 CO 2 ) through proton transfer and cleavage of the acid group would create the m / z 86 fragment (C 4 H 8 NO + ). Following injection of the derivatized bone samples, reconstructed ion traces for m / z 188–132 and 188–86 transitions consistently showed significant peaks at or near the retention of authentic Hyp be ( , top two traces). Despite all precautions taken, the negative controls always revealed a peak for Hyp. However, the intensity of this peak was significantly less than what was obtained for the bone samples . A cochromatography experiment was performed because the Hyp analyses were marked by a greater than usual variation in retention times. 
Following injection of the derivatized bone samples, reconstructed ion traces for the m/z 188–132 and 188–86 transitions consistently showed significant peaks at or near the retention time of authentic Hyp be ( , top two traces). Despite all precautions taken, the negative controls always revealed a peak for Hyp; however, the intensity of this peak was significantly lower than that obtained for the bone samples. A cochromatography experiment was performed because the Hyp analyses were marked by a greater than usual variation in retention times. After the individual samples had been analyzed, the dinosaur bone and authentic Hyp be samples were mixed, and the mixed sample revealed a single sharp peak. This cochromatography experiment showed that the material in the dinosaur bone was chromatographically indistinguishable from Hyp be. Furthermore, the ratios of the peak areas for the two transitions (188–132, 188–86) were the same for authentic Hyp be and for the peak from the derivatized bone extract. Further support for assigning the chromatographic peak in the fossilized samples as Hyp be came from accurate mass measurements made with the orbitrap as the chromatographic detector. The measured mass of the parent ion from the fossilized bone sample was m/z 188.12802, within about 0.5 ppm of the theoretical value (calculated for C9H17NO3 + H+, 188.12812) and within the instrument tolerance. Finally, the Hyp be concentration in the fossilized Edmontosaurus bone sample was estimated by interpolating peak areas for the 188–132 transition using a standard curve constructed from a series of samples of decreasing Hyp be concentration. The peak areas of the standard reference curves covered the range of peak areas in the negative controls and the Edmontosaurus bone samples, so quantitation in these cases was achieved by interpolation from the standard curve. The Hyp peak areas in the turkey bone and bovine collagen samples were outside the range of the standard curves, so in these cases quantitation was achieved by extrapolation. Hyp be was reliably detected and quantified in six separate 1 mg samples taken from the same bone specimen, with concentrations ranging from 6.7 to 41.7 nmol of Hyp be per gram of bone, a result reflecting an uneven distribution of collagen within the fossilized sample and consistent with the XPol imaging, which likewise revealed an uneven distribution of collagen among microscopic regions of the fossil bone.
It is the composite of collagen fibers and bioapatite crystallites in regular patterns that gives bone its birefringence. XPol images of UOL GEO.1 bone tissue revealed differing characteristics of color within two distinct microscopic regions. Hard-edged, angular green shapes are interpreted as calcite inclusions within osteonic lumens. However, a minority of regions that were once fresh bone tissue also show birefringence. Unlike the calcite inclusions that can occur in lumens, these regions occur within the bone matrix. They contain small, dark lacunae that once held osteons. Birefringence within formerly fresh bone appears reddish-gold under crossed polars ( B). With a first order red filter, XPol revealed gold-colored regions ( C inset and D) that turned blue-green when rotated over 90°. This birefringence characterizes the collagen-bioapatite crystallinity that pervades fresh bone. This appears to occur only in patches within the fossil. Two options present themselves to help interpret the observed birefringence. In one, collagen has decayed from all of the Edmontosaurus bone matrix. Since bioapatite crystallites rapidly disperse upon collagen degradation, some other cementing agent would have replaced the role of collagen in holding those crystallites in their original positions and pattern. This scenario would require the exogenous cementing agent, to replace the collagen only within the still-birefringent regions. These minerals would have permineralized pore spaces such as the osteonic lumens before penetrating only a minority of bone matrix. Three deficiencies with this diagenetic scenario emerge. First, the requirement of a cementing agent to move in place of collagen while maintaining the spatial positioning of bioapatite crystallites is unlikely, with randomization or indeed loss of crystallites a more likely outcome. Second, the water required to transport dissolved ions that precipitate into minerals would have facilitated degradative chemistry alongside physical dispersion and transport of original bone collagen and/or bioapatite. Lastly and equally unlikely, the replacement cementing mineral would exactly replicate original crystallite positioning so as to retain bone microstructures including lamellae (seen in other samples) and lacunae as seen in . A second option involves retention of sufficient original collagenous remnants to preserve crystallites in life position. The many published descriptions of biomolecular remnants in fossils strongly suggest that original protein may persist in Cretaceous bone. Indeed, at least ten reports describe remnant osteocytes liberated by dissolution from fossil bone, , − showing some preservation of original organics. A more parsimonious explanation for the regions within the extracellular matrix (ECM) that retain a degree of birefringence is that they retain remnants of original collagen sufficient to hold some crystallites in their original patterns. Accordingly, the regions within bone that have no birefringence would represent zones where collagen has completely decayed and thus where crystallites have dispersed. Turkey bone was artificially decayed at high temperature. Similar birefringence to fossil bone was observed under XPol with a first order red filter . No permineralization was observed in osteons, as expected, since our bone decay procedure did not include dissolved ions. In this case, almost-black areas remain dark after rotating the sample at an approximate right angle (105°), making them no longer birefringent. 
Bioapatite would have dispersed from these areas as high temperatures accelerated the collagen decay. However, microregions traded gold for blue and vice versa upon rotation, in a similar way to the fossil. These results are also consistent with the retention of collagen remnants in birefringent microregions in both artificially decayed turkey and actually decayed Edmontosaurus bone. Fifteen-minute survey analyses were used to determine the amount of each sample required to obtain comparable signal intensities. The base peak intensity (BPI) traces for the 30 min chromatograms of the Edmontosaurus and turkey bone samples are shown in , along with authentic bovine collagen spectra. The samples were digested with trypsin. The resulting fragments were ionized with a nanospray ionization source (nESI). Ions with charge between +2 and +5 were filtered to enter the mass spectrometer, which they do at different times depending on their retention affinity with the chromatography column, before being mass analyzed by the detector. The resulting ion masses are then compared with those in existing databases as described in the database searching methods section. There is an overall similarity between the chromatograms for the turkey and the Edmontosaurus samples and a close match between the retention times for the highest peaks (at 20.321 min for turkey and 20.685 min for Edmontosaurus ) and those peaks immediately preceding and following. The difference in retention times is possibly due to differing bone matrices having different binding affinities to the column. Analysis of the data sets revealed six collagen-derived peptides in the Edmontosaurus sample, with errors ranging from 1.3 to 3.6 ppm and each oxidized at a minimum of one position (always on proline, i.e., hydroxyproline). The ppm error is calculated as the difference between the measured mass and the theoretical (database) mass divided by the theoretical mass, expressed in parts per million. All these sequences correspond to peptides reported in the SwissProt database for Brachylophosaurus canadensis , another duck-billed dinosaur that together with Edmontosaurus is classed in the Hadrosauridae family. Five of the sequences are from the collagen alpha-1(I) chain (length: 113 amino acids, mass: 9664 Da), in total covering 73.45% of the entire sequence (positions 19–33 and 64–78 absent). The remaining detected peptide ( m / z 805.38074) accounts for 50% coverage of the collagen alpha-2(I) chain (length: 36 amino acids, mass: 3122 Da), with positions 19–36 unaccounted for. Three sequences (rows 1, 4, and 5 of ) are also reported for a T. rex sample from the Hell Creek formation. Most of the PTMs on these Edmontosaurus sequences are oxidation, but we also note deamidation of N (asparagine) and Q (glutamine) amino acids: in 85/150 peptides and 55/95 peptides, respectively. Deamidation for our turkey sample is significantly lower (32/440 peptides). Supplementary Table S3 lists a total of at least 41 discovered collagen sequences from UOL GEO.1. It also lists PTMs for the first eight accessions and refers to the raw data (login details are on page 1 of the Supporting Information ; see also the Figure S4 caption). These matched a mixture of extant and extinct fauna found in the SwissProt database. The most abundant of these matched peptides were from domestic chicken ( Gallus gallus ), followed by American mastodon ( Mammut americanum ). In addition, peptides from other proteins were detected in the Edmontosaurus sample. Over a hundred actin peptides were found.
Furthermore, 61 hemoglobin, 158 histone, and 92 tubulin peptides were also detected. The Basic Local Alignment Search Tool (BLAST) analysis ( Supplementary Table S7 ) revealed that the collagen sequence does have genera-specific differences based on comparison with extant taxa, and recurring sequence similarities based on functional constraints. There is also the question of variability caused through diagenesis, which may blur genera-specific residues. The similarity of the UOL GEO.1 sequence to both extant and extinct taxa is therefore not unexpected. Several sequences, including those from chicken ( Gallus gallus ) and dog ( Canis lupus familiaris ), returned sequence identity scores at least as high as those for hadrosaur ( Brachylophosaurus canadensis ). The question arises as to whether such results indicate contamination. Modern humans of course associate with chicken and dog, but not likely with rat ( Rattus norvegicus ) and not at all with mastodon ( Mammut americanum ), both of which also scored higher than B. canadensis . Since it is not possible for mastodon sequence to have entered the analysis stream via modern contamination, alternate explanations for it and the accompanying taxa should be sought. If previously tested samples had included our reported taxa, such a finding would provide evidence for contamination through holdover within instrumentation. However, fruit fly ( Drosophila melanogaster ) samples preceded our dinosaur samples, and no Drosophila peptide sequence matches occurred in our data set. In addition, different database algorithms can assign the same peptide sequences to different taxa. Furthermore, peptide sequences can be remarkably similar among widely different taxa, even up to the phylum level. Thus, it is not unreasonable to expect and find almost identical sequences among differing taxa. The turkey bone data checked against the UniChick database ( Supplementary Table S4 ) gave 55.33% coverage of the collagen alpha-1(I) chain and 74.69% coverage of the alpha-2(I) chain. The UniTurkey database contains no matches for type I collagen ( Supplementary Table S5 ). No Brachylophosaurus peptides were found in the turkey bone data set (compared against the UniChick and UniTurkey databases). A majority (185) of the 188 peptides found in bovine tendon (96% pure) collagen matched Bos taurus , accounting for 98.40% of the amino acid sequence ( Supplementary Table S6 ). Under ESI conditions, Hyp be (butyl ester of hydroxyproline) produces an intense parent ion at m / z 188 (MH + ), which, upon fragmentation under the specified MS/MS conditions, yields fragment ions at m / z 132 and 86. The former likely results from loss of the butyl side chain (C 4 H 8 ) with a proton transfer to the CO group: thus, (C 5 H 9 NO 3 + H) + would be the corresponding elemental composition. Further neutral loss of 46 u (H 2 CO 2 ) through proton transfer and cleavage of the acid group would create the m / z 86 fragment (C 4 H 8 NO + ). Following injection of the derivatized bone samples, reconstructed ion traces for the m / z 188–132 and 188–86 transitions consistently showed significant peaks at or near the retention time of authentic Hyp be ( , top two traces). Despite all precautions taken, the negative controls always revealed a peak for Hyp. However, the intensity of this peak was significantly less than what was obtained for the bone samples. A cochromatography experiment was performed because the Hyp analyses were marked by a greater than usual variation in retention times.
After analyzing the individual samples, the dinosaur bone and authentic Hyp be samples were mixed, and the result from the mixed sample revealed a single sharp peak. This cochromatography experiment showed the material in dinosaur bone was chromatographically indistinguishable from Hyp be . Furthermore, the ratios of the peak areas for the two transitions (188–132, 188–86) from authentic Hyp be and the peak from derivatized bone extract were the same. Further testing of the validity of the assignment of the chromatographic peak as Hyp be in the fossilized samples came from accurate mass measurements completed with the orbitrap as the chromatographic detector. The measured mass of the parent ion from the fossilized bone sample was recorded at m / z 188.12802, which is within 0.5 ppm of the theoretical calculated value (calculated for C 9 H 17 NO 3 + H + , 188.12812), and this lies within the instrument tolerance. Finally, the Hyp be concentration in the fossilized Edmontosaurus bone sample was estimated by interpolating peak areas for the 188–132 transition using a standard curve constructed from a series of samples of decreasing Hyp be concentration. The peak areas in the standard reference curves covered the range of peak areas in the negative controls and the Edmontosaurus bone samples, so the quantitation in these cases was achieved by interpolation from the standard curve. The Hyp peak areas in the turkey bone and bovine collagen samples were outside the range of the standard curves, so in these cases quantitation was achieved by extrapolation. Hyp be was reliably detected and quantified in six separate 1 mg samples taken from the same bone specimen, with concentrations ranging from 6.7 to 41.7 nmoles of Hyp be /gram of bone, a result reflecting an uneven distribution of collagen within the fossilized sample, consistent with XPol imaging, which also revealed an uneven distribution of bone collagen within microscopic fossil bone regions. The discovery of soft tissue in dinosaur remains has been controversial due to conflicting explanations of contamination versus endogeneity for the observed results. To look for contamination, our novel approach quantifies hydroxyproline. The hypothesis is that if prior techniques used to characterize fossil bone collagen were subject to false positives via contamination, e.g., by recent noncollagen, collagen look-alikes, or residual collagen trapped within instrumentation, then it would be less likely that Hyp could be detected and quantified. However, in multiple runs taken from separate sample extracts, it was possible to quantify Hyp. This result is more consistent with the hypothesis of preserved collagenous remnants and at the same time makes claims of contamination more difficult to defend. Another novelty of our methodology is that by quantifying as well as sequencing, a sense may be gained of how decayed the collagen is. If the sequenced collagen were contamination from recent sources, the sequence would be largely complete (not having been around long enough for partial or severe chemical decay) and the yields would be closer to those of fresh bone. Instead, we find short sequence fragments and lower Hyp yields, both independently consistent with ancient and decayed, not modern, collagen. To date, collagen sequence data has been published from limb bones, i.e., T. rex and B. canadensis femurs. Here, for the first time, we show independent, partial matches to those sequences, but extracted from a sacrum.
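Two of the numerical steps described above, the parts-per-million comparison of measured and theoretical parent-ion masses and the interpolation of unknown peak areas on a standard curve, can be sketched in a few lines of Python. The masses 188.12802 and 188.12812 are taken from the text; the calibration points, function names, and the assumption of a simple least-squares line through the standards are hypothetical placeholders, since the actual calibration data are not reported here.

```python
import numpy as np

def ppm_error(measured_mz: float, theoretical_mz: float) -> float:
    """Mass error in parts per million: (measured - theoretical) / theoretical * 1e6."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

# Accurate-mass check for the Hyp butyl ester parent ion [M+H]+ (values from the text).
err = ppm_error(188.12802, 188.12812)
print(f"{err:.2f} ppm")  # about -0.5 ppm, i.e. within a sub-ppm orbitrap tolerance

def interpolate_concentration(peak_area: float, standard_areas, standard_concs) -> float:
    """Estimate concentration from a peak area using a linear standard curve.

    Fits area = slope * concentration + intercept by least squares and inverts it.
    This is interpolation when the unknown area lies inside the standards' range;
    outside that range the same line acts as an extrapolation, as noted in the text
    for the turkey bone and bovine collagen samples.
    """
    slope, intercept = np.polyfit(np.asarray(standard_concs, float),
                                  np.asarray(standard_areas, float), deg=1)
    return (peak_area - intercept) / slope

# Hypothetical dilution series (nmol Hyp_be per injection) and detector peak areas;
# the real calibration values are not given in the text.
standard_concs = [0.5, 1.0, 2.0, 4.0, 8.0]
standard_areas = [1.1e4, 2.0e4, 4.1e4, 8.2e4, 1.6e5]
print(round(interpolate_concentration(5.0e4, standard_areas, standard_concs), 2))
```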
A further novelty is that, to date, no researchers have used protein sequencing in combination with XPol. Until now, XPol has been used to visualize collagen in fresh bone, not ancient bone. This is possibly due to the expectation that mineralization during diagenesis would have replaced the birefringent property of ancient collagenated bone. For the first time, we present XPol results showing collagen-like birefringence in combination with collagen sequence data for a dinosaur, establishing XPol as another tool to characterize fossil bone collagen. Previous studies using FTIR of Jurassic and Cretaceous dinosaurs have shown the organic amide I group around the 1650 cm –1 absorption band, with the phosphate vibrations representing apatite found between 960 and 1100 cm –1 . Here, such results are confirmed only in the modern turkey bone. In completely mineralized bone, the CO maximum would be indistinguishable from the baseline absorbance. In the Edmontosaurus , the CO absorption maximum is above baseline, with a CO/P ratio of 0.065, consistent with residual organic material, though not conclusively collagen. Microscopic regions within UOL GEO.1 retain birefringence characteristic of the original collagen/bioapatite bone constituents. They also resemble similar regions within artificially decayed Meleagris bone. Areas that appear dark and purple are no longer birefringent and surround the birefringent regions. If high temperatures were extended in the Meleagris experiment, birefringence would eventually give way to nonbirefringent regions, consistent with biochemical decay of bone collagen and the subsequent release and randomization of bioapatite crystallites. Therefore, it is possible that the nonbirefringence seen in much of the field of view corresponds to decayed bone collagen in both the artificially and actually decayed (fossil) bone samples. The similarity between birefringent patterns for Meleagris and Edmontosaurus bone is consistent with the concept that endogenous collagenous remnants, with sufficient integrity to hold enough bioapatite crystallites in their original, regular arrangements, continue to cause birefringence in both these samples of bone tissue. However, since XPol cannot directly report molecular data, any conclusions regarding molecular preservation require independent analyses to confirm collagen remnants. If confirmed independently in the same fossil bone sample by, e.g., MS analysis, then XPol offers the possibility of both describing and spatially mapping microscopic collagen regions and decay patterns in ancient bone. Hydroxyproline (Hyp) is found in few proteins other than collagen, but comprises between 4 and 10% of collagen residues in extant vertebrates. The presence of hydroxyproline in the fossils is consistent with a collagenous origin. Because collagen is by far the most abundant protein in bone tissue, Hyp is targeted as an indicator of collagen, and our results verify the presence of Hyp in acid-digested samples. This otherwise unusual amino acid constitutes 9.6, 7.8, and 4.0 residue percent of collagen from rat, bovine, and codfish, respectively. In our experiments, for the first time, the amount of collagen in Mesozoic dinosaur bone is quantified by singling out authentic Hyp in samples of known provenance. MS is the preferred method for protein identification, providing unparalleled sensitivity and specificity. MS indicated the presence of organic materials (alpha chains) in T. rex .
At that time these were not referenceable against a database with dinosaurian peptides. Later, Brachylophosaurus data also yielded peptide chains and the presence of Hyp. This paper is the first wholly independent confirmation of these previous conclusions via similar results in Edmontosaurus UOL GEO.1, albeit on a relatively limited data set. Our bottom-up proteomic analyses revealed a total of at least 41 collagen polypeptides ( Supplementary Table S3 ). The focus of the results in this paper is on the peptides assigned to Brachylophosaurus alpha-1 and alpha-2 helices. It is probable that similar taxa ( Brachylophosaurus and Edmontosaurus ) had many proteins in common. For instance, two collagen alpha-1 (I) chain peptides (residue assignments 1–18 and 79–95) found in the Edmontosaurus were also discovered in the Brachylophosaurus , both with modifications on the same prolines. In total, five revealed polypeptides are assigned to the Brachylophosaurus canadensis collagen alpha-1 (I) helix; a sixth belongs to the collagen alpha-2 (I) helix of the same. These peptides, some unique to dinosaur, can therefore be regarded as confirmation of original endogenous collagen rather than contamination from any extant creature. The scarcity of post-translational modifications (PTMs) is evidence of exceptional preservation for UOL GEO.1. Detection of soft tissues (e.g., proteins) in fossil bones is a growing field of study, and this paper contributes to the list of such findings. Corroborating results from a novel combination of three independent analytical techniques are presented which, taken together, provide experimental evidence for the conclusion that collagenous protein remnants in some dinosaur bones are original (endogenous) to the fossils, addressing this long-standing controversy in the scientific literature.
Is social media really impacting urogynecology?
8dcae2b1-6083-44c4-a59b-bcd89dd00c0c
7260448
Gynaecology[mh]
Origins and Evolution of Social Medicine and Contemporary Social Medicine in Korea
f533da2c-c695-4343-80f9-bde6f0e4f603
5495682
Preventive Medicine[mh]
A social medicine course is offered at many medical schools in European countries . Some medical schools in the US have an academic department named social medicine, or social medicine in combination with another discipline, such as the Department of Global Health and Social Medicine at Harvard University Medical School, the Department of Family and Social Medicine at the Albert Einstein College of Medicine, and the Department of Social Medicine at the University of North Carolina at Chapel Hill Medical School. And there are medical schools that have a Department of Preventive and Social Medicine in numerous countries, including New Zealand, Malaysia, Thailand, Myanmar and India. The first edition of The Social Medicine Reader , edited by the faculty members of the Department of Social Medicine at the University of North Carolina at Chapel Hill Medical School, was published in 1997 , and the second edition, published in 2005, was expanded into three volumes . These facts indicate that social medicine is recognized as a specialty of medicine in many countries. In Korea, however, the medical community seems to be hardly aware that there is a medical specialty named social medicine. A paper on an overview of the historical development of social medicine in 19th-century Germany is the only published material about social medicine written in Korean . The fact that the Korean medical community is not aware of social medicine does not imply that nothing about social medicine is dealt with in the medical schools or that none of the approaches based on social medicine is employed in the health care system. The evolution of social medicine has been internationally diverse, so that its concerns and subject matters may vary to some extent among different national contexts . Based upon these observations, this paper will discuss the state of social medicine in Korea following a review of the literature on the origins and evolution of social medicine. In doing so, this study is aimed at the following objectives: 1) to improve the understanding of the medical profession about social medicine in Korea by providing a description of its origins and development; 2) to assess the current state of social medicine in Korea and suggest agendas for its future development. With the rapid industrialization and urbanization at the turn of the 19th century, European countries faced many social problems, including an increasing number of low-wage workers, poor working conditions, and a lack of housing and sanitation facilities. Diseases and deteriorating health conditions among industrial workers and in the low-income population were also serious. Under these circumstances, a group of reformist French physicians and hygienists conducted surveys and statistical studies about the relationships between health problems and social conditions . Furthermore, the first 30 years of the 1800s marked the development of modern clinical medicine to replace classical medicine. French physicians realized that many traditional therapeutic techniques were ineffective and, as an alternative, directed attention to hygiene and the influence of social factors on health and disease . Presumably, in addition to such health problems and the state of medicine, the zeitgeist in the time of social revolution had made reformist physicians conceive of social medicine. The term 'social medicine' was first used in 1848, when the French Revolution took place in February. In March of the same year, when revolutionary hopes were still running high, Dr.
Jules Guérin used the term in a piece written for the Gazette Médicale de Paris . In that writing, he appealed to the French medical profession to act for the public good and to help create the new society expected from the revolution . Guérin argued that the goal could be effectively achieved if knowledge and information regarding the relationships among medical issues, social factors and public affairs were systematically integrated into the framework of social medicine. In Germany, a group of medical doctors and others led by Salomon Neumann, Rudolf Virchow and Rudolf Leubuscher promoted health care reform after the revolution in March 1848 . They fully understood the effect of social factors on health problems. Virchow was a pathologist who provided empirical data supporting the argument that social conditions are important factors in the outbreak of an epidemic. His report, produced in 1848, on the typhus epidemic in the Upper Silesia region of Prussia is considered a classic in the history of social medicine . People are simultaneously biological and social organisms, and thus human health and disease are affected by social factors as well as by biological factors. Included in the basic idea and concept of social medicine is that an interdisciplinary program between medicine and social science would provide medicine with the knowledge and skills needed to analyze the social causes of health and illness, in the same way as the alliance between medicine and laboratory sciences had provided new insights into the biological, chemical and physical bases of disease . Rudolf Virchow and his colleagues proposed three basic principles regarding the academic and practical aspects of social medicine that were summarized by Rosen as follows: 1) the health of the population is a matter of direct social concern; 2) social and economic conditions have an important effect on health, disease and the practice of medicine, and these relations must be subjected to scientific investigation; and 3) steps must be taken to promote health and to combat disease, and the measures involved in such action must be social as well as medical. These principles have been retained until now, without fundamental changes, even while being adapted to different societies and conditions over an extended period of time . Although social medicine was initiated in France and Germany around the same period, its theory was more actively developed in Germany. The literature on social medicine that appeared in Germany during the period from 1900 to 1920 is extensive . Probably for this reason, Rudolf Virchow is commonly considered the founder of social medicine . The theory of social medicine developed in Germany had a wide influence on the development of this field in many other European countries . Many medical schools in these countries have retained a commitment to its foundational ideas from the early stage to the present day. For example, a study of the curricula of 32 medical schools in 18 European countries conducted in 2002 revealed that over half of the schools were offering social medicine courses . Social medicine was introduced to Latin America and the US in the 20th century. Social medicine in Latin America was at its prime in the 1930s, when Salvador Allende, who later became the president of Chile, was central in promoting the field . In the US, interest grew in social medicine, and discussion of the topic was popular during the period after the end of World War II .
For instance, the New York Academy of Medicine hosted an academic conference on social medicine in the spring of 1947 and published the report of the proceedings . In November of the same year, the Milbank Memorial Fund held a roundtable discussion on social medicine . Thereafter, the American medical community avoided using the term social medicine for a substantial period of time. The reason for the avoidance was that the phrase 'social medicine' sounded very much like 'socialized medicine' and the concept incorporated the politically suspect idea of a national health system. By the early 1950s, the American social medicine movement had lost its momentum during the red scare of what is known as the era of McCarthyism . It seems that the term social medicine was no longer considered taboo by the mid-1960s. In a survey of American scholars in the fields of preventive medicine, community medicine and public health, conducted during the period from August 1965 to March 1966, it was found that the majority of respondents preferred social medicine as the name of their field of study . Papers on social medicine, although not many, continued to be published; discussions on social medicine education began, and practical changes took place as well . Recently, on April 30, 2016, the Social Medicine Consortium, composed of individuals, universities and organizations striving for equity in health, held a symposium on social medicine at the University of Minnesota, exemplifying the current perception of and interest in social medicine in the US . Most established academic disciplines have some common institutional arrangements, such as courses on the disciplinary subject offered by an autonomous organizational unit at colleges or universities and an academic society for the discipline. From early in the 20th century, social medicine began to become institutionalized as an academic discipline, and the institutionalization was expedited around the end of World War II . The University of Vienna began to offer a social medicine course in 1909, and the University of Zagreb in Croatia appointed a faculty member of social medicine in 1931. In the UK, the appointment of the first chair of social medicine by Oxford University in 1943 provided a great stimulus to social medicine as an academic discipline. Some two years later, the University of Edinburgh, the University of Birmingham and Trinity College Dublin each appointed a faculty member of social medicine . The Interim Report of The Royal College of Physicians of London, 1943, recommended that every medical school should establish a Department of Social and Preventive Medicine and made recommendations on how the subject should be taught . In 1956, the Society for Social Medicine was established . According to Rosen , at least until the early 1970s, the content of courses offered by a Department of Preventive Medicine in American medical schools was essentially the same as that offered by a Department of Social Medicine in British medical schools. The history of the Department of Social Medicine at the medical school of the University of North Carolina at Chapel Hill exemplifies the traditional relationship between social medicine and preventive medicine. The Department, which originated from the Department of Preventive Medicine in 1952, has kept its current name since 1980, after going through a few instances of reorganization and renaming. Furthermore, the department is now responsible for the resident training program for preventive medicine .
The majority of the medical schools in India have a Department of Preventive and Social Medicine, following the recommendation made at a medical education conference in 1955 . In addition, as mentioned earlier, many medical schools across the world, including those in New Zealand, Malaysia, Thailand and Myanmar, have a Department of Preventive and Social Medicine. The main medical interventions in modern health care are based on biomedical sciences and technologies that have been developed with advances in human biology, other natural sciences and engineering. New effective biomedical interventions are continuously developed, so that increasingly more diseases can be prevented and treated. However, the fundamental limitations of biomedical interventions should not be overlooked. As described before, health and disease are affected by social factors as well as by biological factors. For example, people may suffer from preventable communicable diseases due to the unsanitary living conditions of slum areas, and people may die from curable diseases because of delays in seeking adequate medical services due to financial burden. Although the direct cause of their suffering and death was disease, the underlying cause was poverty, which is not a biomedical problem. Generally speaking, the social causes of, experiences of and responses to diseases and other health problems do not belong to the domain of biomedical science or intervention. Furthermore, many problems in health care associated with the increasing effectiveness and value of medical services, changes in the pattern of illnesses, aging of the population and the continuous increase in health expenditure are more social than medical. Advancements in medicine and the development of modern health care changed the major causes of morbidity and mortality from infectious to chronic and degenerative diseases. In response to such changes in patterns of disease, health policy focused on changing health behavior and promoting healthy lifestyles. From the 1960s, social medicine also increasingly concentrated on relations between health, illness and social behavior . But empirical studies revealed the limitation of a model of prevention that primarily focused on changing individual behavior , and therefore policy and research interest shifted to addressing the social structural determinants of health and disease. Recently, policy efforts have given added emphasis to developing approaches directed at the social determinants of health as concern with health inequalities has increased . Social medicine explicitly investigates social determinants of health and disease, rather than treating such determinants as mere background to biomedical phenomena . In line with this perspective of social medicine, Link and Phelan argued that epidemiological studies should pay greater attention to basic social conditions, questioning the emphasis on such individually-based risk factors as diet, cholesterol level, exercise and the like. They indicated two reasons for this claim. One of their arguments is that individually-based factors must be contextualized to craft effective interventions to improve population health. The other is that social factors such as socioeconomic status and social support are likely fundamental causes of disease.
Eisenberg more specifically argued that the distribution of health and disease in human populations reflects where people live, what they eat, the work they do, the air and the water they consume, their activity, their interconnectedness with others and the status they occupy in the social order. Holtz et al. also indicated that each of the risk of exposure, host susceptibility, course of disease and disease outcome is shaped by the social matrix, whether the disease is labeled infectious, genetic, metabolic, malignant, or degenerative. Both of the papers provided illustrations of the social roots of diseases. Although infectious diseases are clearly caused by biological factors, the patterns and duration of the infection vary according to the characteristics of population, such as size, structure, density, their utilization of health care services and living conditions . By definition, an infectious agent is a necessary cause of the disease. Eliminating the agent eliminates the disease. But it is not a sufficient cause, for not every person exposed to the agent develops clinical disease. The resistance of the host is as decisive as the virulence of the agent. Moreover, the epidemiology of infectious diseases is affected by human organizations as well as by the characteristics of the infectious agent. For example, the penetration of an infectious agent, which is virulent and infectious only in acute phase, into a small community would rapidly kill or immunize so high a proportion of the population that the agent is no longer able to propagate itself. On the other hand, in big cities, such agents have a large enough reservoir to maintain the chain of transmission. And social stratification is to be made in large communities, and disease epidemiology begins to correspond to the stratification. The change in the prevalence of type 2 diabetes (NIDDM) among the people of Nauru, a small island in the South Pacific, is a good example of the relationship between socioeconomic factors and diabetes . Until World War II, the main job of Nauruans was fishing and farming for subsistence which required high energy expenditure. After the war, introduction of phosphate mining by foreign companies yielded rental income for Nauruans that rapidly transformed them into wealthy and sedentary people. Virtually all foodstuffs were imported, and most had a high calorie content; obesity became ubiquitous. NIDDM, previously minimal, began to reach epidemic proportions in the 1950s, and in the late 1990s, afflicted almost two-thirds of 55-year-old to 64-year-old adults. The distribution of the disease among Nauruans has continued to change during the past 50 years. Health surveys revealed that the age standardized prevalence of impaired glucose tolerance rose to 21% in the mid-1970s and then declined to half that value by the late 1980s; yet, the risk factors persisted. According to Eisenberg , the plausible explanation for the rise and subsequent fall is that NIDDM resulting from the affluent lifestyle has already afflicted most of the genetically susceptible Nauruans, leaving a residual population of relatively resistant individuals. Neel has proposed the “thrifty genotype” hypothesis to explain the epidemiological changes in diabetes, like those observed in Nauru. In a situation where there is a fluctuating food supply and frequent famines, greater fat stores would be helpful for surviving subsequent periods of starvation. 
Individuals with thrifty adaptations (i.e., those able to release insulin rapidly when a temporary food glut becomes available) can convert most of their ingested calories into fat. The very same genotype becomes a handicap in the presence of abundant high-calorie foodstuffs and reduced physical activity. This hypothesis indicates that social conditions, through interaction with genotype, can influence the distribution of diseases in a population. The prevalence of heart disease and diabetes is two to three times higher in African Americans than in whites, but representative surveys of Caribbean populations of African origin have revealed prevalence rates two to five times lower than those of blacks in America or Britain. This suggests that racial disparities in health status observed in the US are associated with social contexts rather than with biological attributes, including genotype . The Center for Interdisciplinary Health Disparities Research at the University of Chicago (CIHDR) proposed a downward causal model, or a multilevel causal model, of the mechanism through which social factors cause diseases and influence health outcomes . According to the model, upstream determinants at the social and environmental levels influence and regulate events at lower levels, that is, from individual behavior and physiology to the cellular and genetic interactions with health and disease. Feedback also occurs from lower to higher levels, with genetic and biological factors influencing phenomena above them. In the US, despite the fact that white women are more likely to develop breast cancer, black women are more likely to die from it. Through the study of this disparity, CIHDR illustrated the applicability of the model for understanding the causal role of certain social factors in the development of diseases. Several empirical studies on the effects of social factors on health and disease were briefly reviewed. These studies indicate the inherent social basis of disease causation that is part of the basic concept and theory of social medicine. And they provide some rationale for Eisenberg's claim that all medicine is inescapably social . Since Japanese medicine was influenced by German medicine, it is probable that social medicine was known in Korea during the period of Japanese rule. Hong-Jong Yoo, a Korean physician, used the term social medicine in an essay titled "Two major harms from the viewpoint of hygiene," printed in the first issue of Gaebyuk published in 1920 , and the term appeared in newspapers around that time. But the extent to which social medicine was established as an academic discipline or as a specialty of medicine in Korea is not known. The term social medicine has rarely been used in Korea since the liberation from Japanese rule, either. Exchange with American medicine, which became active from around the 1950s, was the driving force for the development of Korean medicine. But social medicine was not introduced, presumably because American medicine avoided using the term for a substantial period of time, especially for some years from the era of McCarthyism in the early 1950s. However, some research papers, which considered social factors as part of their study variables, used the term social medicine in the title, as in 'socio-medical study.' (There are papers titled "Social medicine" and "A study on the development of social medicine curriculum" , but their content is not about social medicine but about medical education.)
The establishment of Institute of Social Medicine, Hallym University (The Institute is now named Health Services Research Center.) in 1984 was the first formal use of the term in Korea. In 1985, Hallym University College of Medicine established the Department of Social Medicine (literal translation of Korean name) instead of the Department of Preventive Medicine which is the common name used in Korean medical colleges. A few years later, two more newly founded colleges of medicine established the Department of Social Medicine. Their English name is the Department of Social and Preventive Medicine. As the reasons for using social medicine instead of or in combination with preventive medicine in those medical colleges, two points are indicated. First, fundamental knowledge and technologies for prevention are developed by all the medical specialties and most of preventive services for individuals are performed at the departments of clinical medicine, so that prevention cannot be monopolized by a certain specialty. In fact, prevention is the concern of all the medical specialties including basic medical sciences . Second, preventive medicine, as the title of specialty, does not reflect the fact that Korean preventive medicine deals with much broader content than prevention. At this point, we may ask a couple of questions. What is the relationship between social medicine and preventive medicine? Can the use of the term ‘social medicine’ help resolve the problems faced with the use of the term ‘preventive medicine’? In the textbook edited and published by the Korean Society for Preventive Medicine , preventive medicine is defined as one of medical specialties aimed to protect, maintain and promote health and well-being of individuals and groups of people, and to prevent disease, disability, and premature death. This definition implies that preventive medicine is distinguished from other medical specialties by its two characteristics, focus on prevention and concern with groups of people as well as individuals. Understanding of patterns of health and illness in groups of people and making interventions at the population level to improve their health require consideration of the effects of various social factors on health and health care delivery system. Therefore, the biomedical model of health and disease is not appropriate for dealing with many of the problems and issues involved in the research and practice of preventive medicine . These concepts of preventive medicine associated with its population perspective to health and disease are the very basic ideas and concepts of social medicine and the term ‘social medicine’ apparently reflects such concepts better than the term ‘preventive medicine’. In fact, once the two terms were often used interchangeably in America , perhaps on the basis of such commonality. In the light of the conceptual commonality, it is understandable that Korean preventive medicine deals with many of the subject matters of social medicine. A quick observation of the subjects of the aforementioned textbook published by the Korean Society for Preventive Medicine is made to confirm that they include those of social medicine. The subjects of the book are grouped into four parts: ‘Health and Disease’ (part I); ‘Epidemiology and Its Applications’ (part II); ‘Environment and Health’ (part III); and ‘Health Care Services and Management’ (part IV). 
In describing the concepts related to preventive medicine and public health in part I, socioeconomic, cultural and political factors are considered as part of the determinants of health. This perspective on health is in agreement with the concepts of social medicine. Epidemiology, discussed in part II, is used as a methodology in social medicine as well. Besides, the epidemiology of most diseases is affected by social factors as well as by biological factors. The subjects of part III, environmental pollution, environmental contamination and occupational diseases, are also closely associated with social and economic conditions. The health care delivery system, health insurance and health behavior, included as the subjects of part IV, are also important issues for social medicine. Even through a quick, fragmentary observation, it is found that Korean preventive medicine incorporates a great deal of social medicine content. In Korea, the term social medicine is rarely used, but many of its subject matters are incorporated into preventive medicine, as reflected in a textbook. But the implicit incorporation of fragmentary contents of social medicine without any discussion of its concepts and theory may be of little help for understanding even the basics of social medicine, such as the need for and the significance of investigating the effects of social conditions on health, disease and health care. Therefore, efforts should be made to supplement the social medicine content of preventive medicine through formalizing linkages between the two fields. One way of doing so is to change the title of the 'preventive medicine' course in medical colleges to 'preventive and social medicine,' as in many other countries, and to adjust the contents of teaching and textbooks. It is believed that this change will also be helpful for clearly defining the academic and practical identity of preventive medicine. It was observed that social medicine is recognized as a specialty of medicine in many countries. The Korean medical community, however, does not seem to be aware that social medicine is one of the medical specialties, but this does not mean that nothing about social medicine is dealt with in medical colleges or that none of the social medicine approaches is employed in health care services. Since social medicine has evolved in diverse ways in different countries, the main concerns and subject matters of teaching and research may differ to some extent among countries. Based upon these observations, this paper is intended: 1) to improve the medical profession's understanding of social medicine in Korea through providing a description of its origins and development; and 2) to assess the current state of social medicine in Korea and suggest agendas for its future development. Included in the core principles of social medicine are: 1) that social and economic conditions have an important effect on health, disease and the practice of medicine, and these relations must be subjected to scientific investigation; and 2) that the measures to promote health and combat disease must be social as well as medical. Interest in the relationships between health and social factors began in the 18th century, but the term 'social medicine' was first used in 1848 by a French doctor, Jules Guérin, in the year of the February Revolution in France. In the same year, Rudolf Virchow and his colleagues initiated social medicine in Germany.
Social medicine, initiated in France and Germany, had a wide influence on the development of this field in many other European countries. Interest in social medicine grew and discussion of the topic was popular in the US for some time after World War II. However, the American medical profession avoided using the term for a substantial period of time. The reason for the avoidance was that the phrase 'social medicine' sounded very much like 'socialized medicine' and the concept incorporated the politically suspect idea of a national health system. By the early 1950s, the American social medicine movement had lost its momentum during the red scare of what is known as the era of McCarthyism. Presumably because of the avoidance of the term by American medicine, which has widely influenced Korean medicine, social medicine has not been introduced to Korea. Korean preventive medicine is distinguished from other medical specialties by its two characteristics: a focus on prevention and a concern with groups of people as well as individuals. Understanding patterns of health and illness in groups of people and making interventions at the population level to improve their health require consideration of the effects of various social factors on health, disease, and the health care delivery system. In other words, Korean preventive medicine deals with, besides prevention, health problems at the population level that are inescapably social. These concepts of preventive medicine, associated with its population perspective on health and disease, are the very basic ideas and concepts of social medicine. In Korea, the term social medicine is rarely used, but many of its subject matters are included in preventive medicine, as reflected in a textbook. But it is not likely that further systematic development of social medicine will be made, because there has never been any academic discussion of the concepts and theory of social medicine, upon which such development can be based. This indicates that efforts should be made to supplement the social medicine content of preventive medicine by formalizing the linkages between preventive medicine and social medicine. One way of doing so is to change the title of the 'preventive medicine' course in medical colleges to 'preventive and social medicine,' as in many other countries, and to adjust the course contents in accordance with the new title.
NTN4 as a prognostic marker and a hallmark for immune infiltration in breast cancer
22281a59-6f3d-459b-9dc3-98b8969ce011
9217917
Anatomy[mh]
Netrins belong to a conserved laminin-like secreted protein family, originally identified as axon-guiding molecules . Netrins are also expressed outside the nervous system and are involved in a variety of biological processes, including tissue morphogenesis , angiogenesis , lymphangiogenesis , tumorigenesis , migration , invasion , adhesion , apoptosis and inflammation . Netrins are highly conserved during evolution. Netrin1 (NTN1), netrin3 (NTN3) and netrin4 (NTN4) have been identified in mammals. Netrin-4 (NTN4, also known as β-netrin) is a newer member of the netrin family in vertebrates, localized to the basement membrane surrounding lobular structures in the blood vessels, kidneys, breasts and ovaries . NTN4 is secreted by breast epithelial cells and sequestered by the basement membrane. NTN4 was highly expressed in invasive breast adenocarcinoma . NTN4 may participate in the development and progression of a variety of cancers, and NTN4 has been proposed to serve as a prognostic biomarker for breast cancer . Based on these findings, we aim to explore whether NTN4 affects the prognosis of breast cancer patients. Furthermore, no investigation has focused on the relationship of NTN4 with the tumor microenvironment (TME) of breast cancer. Whether NTN4 expression is associated with immune infiltration in the TME or clinical outcome remains undetermined. In early 2021, the World Health Organization (WHO) International Agency for Research on Cancer (IARC) published 2020's global cancer data ( https://www.iarc.fr/faq/latest-global-cancer-data-2020-qa/ ). Breast cancer (BC) has replaced lung cancer as the most common malignancy globally, with an estimated 2.26 million cases annually worldwide, and ranks first in both morbidity and mortality among women. Breast cancer is a serious threat to human health. Continuous development of molecular markers specific to cellular subsets and of targeted therapies will be an important research direction in the future. In recent decades, the prognostic predictive value of mRNA expression has become increasingly attractive. The transcriptome of primary breast tumors can help predict intrinsic subtypes, tumor grades, drug response, risk of recurrence, and survival . Here, for the first time, we have provided supporting evidence for a relationship between NTN4 and immune infiltration as well as clinical outcome. In this study, the association between NTN4 mRNA and breast cancer prognosis was evaluated using public databases such as Kaplan–Meier plotter and PrognoScan. In addition, the associations of NTN4 mRNA levels with clinicopathological characteristics and tumor-infiltrating immune cells were investigated in breast cancer. Meanwhile, gene alterations, molecular functions and regulatory pathways of NTN4 were explored. Our findings shed light on the role of NTN4 in breast cancer and point to a potential interaction between NTN4 and the TME. Tissue samples Three pairs of tissue from breast cancer patients were analyzed. The criteria for tissue inclusion were an original histological diagnosis of invasive breast carcinoma and the availability of clinicopathological data. Specimens were frozen in liquid nitrogen (− 80 °C) for analysis. The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethical Committee of Liaocheng People's Hospital, and each patient provided informed consent. This study was performed according to the REMARK guidelines.
TIMER database analysis We analyzed the relationship of the NTN4 gene with the respective abundances of infiltrating immune cells (B cells, CD4 + T cells, CD8 + T cells, neutrophils, macrophages, and dendritic cells (DC)) in breast cancer patients using The Tumor IMmune Estimation Resource (TIMER) algorithm database ( https://cistrome.shinyapps.io/timer/ ) . Tumor purity is a vital factor that influences the analysis of immune infiltration in tumor samples by genomic approaches. RNA-sequencing data and bioinformatics analysis NTN4 was preliminarily proposed as a prognostic marker for breast cancer patients; however, it has not been verified in cohorts with large sample sizes. This is the first study to confirm the clinical significance of NTN4 in breast cancer in large cohorts. The METABRIC database was used for the analysis of breast cancer subtypes. The Cancer Genome Atlas (TCGA) database was used to collect RNA-seq data and clinical information from 1098 cases of breast cancer. The original format of the downloaded data was level 3 HTSeq-FPKM (fragments per kilobase of transcript per million mapped reads), which was converted into transcripts per million (TPM) for subsequent analysis. Paired and unpaired tests were performed to compare expression patterns of NTN4. The area under the curve (AUC) for NTN4 was analyzed to determine whether NTN4 can be used as a biomarker to distinguish between tumor and adjacent tissues. These analyses were conducted with R software. UALCAN database analysis The UALCAN database (ualcan.path.uab.edu/index.html) was applied to analyze the relationships of NTN4 mRNA expression and NTN4 promoter methylation levels with clinicopathological characteristics . Survival Analysis using PrognoScan and Kaplan–Meier Plotter To investigate the prognostic value of NTN4 mRNA in breast cancer, Kaplan–Meier Plotter ( http://www.kmplot.com , P-value < 0.05) and the PrognoScan database ( http://dna00.bio.kyutech.ac.jp/PrognoScan/ , adjusted threshold Cox P-value < 0.05) were applied. Specifically, the NTN4 expression level was searched in all available microarray datasets of PrognoScan to determine its relationship with prognosis. We selected four datasets for analyzing NTN4 expression in breast cancer. The threshold was set as a Cox P-value < 0.05. Gene alterations of NTN4 in breast cancer using cBioPortal Gene alterations of NTN4 in BC were explored using the cBioPortal ( http://www.cbioportal.org ). We selected the Breast Invasive Carcinoma (TCGA, PanCancer Atlas) dataset, which contains 1084 samples, for subsequent analyses. An OncoPrint was constructed in the cBioPortal (TCGA provisional) to directly reflect all types of NTN4 alterations, including gene amplification, deep deletion, mRNA upregulation, and mRNA downregulation, in patients with BC. In addition, the potential effects of NTN4 gene alterations on the survival of BC patients were estimated using Kaplan–Meier survival curves in the cBioPortal. STRING database For an in-depth exploration of this relationship, the STRING database (version 11.0) was applied ( https://cn.string-db.org/cgi/network?taskId=bsL1tXI2yNb4&sessionId=b7FzfW5U0kqB ) . STRING contains both known and predicted protein–protein associations based on bioinformatic resources, including curated databases, experimental/biochemical data, PubMed abstracts, and others. Using NTN4 as the input, proteins that might interact with NTN4 were searched. The default scoring threshold for interactions was 0.4, and a subnetwork constructed from genes that might interact with NTN4 was extracted. The NTN4 driving genes and interactive genes were constructed into a network.
Then, the STRING database was used to conduct gene ontology (GO) enrichment and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analyses of all selected genes. Immunohistochemistry (IHC) The expression of NTN4 protein was verified by IHC. Tissue paraffin sections were dewaxed in xylene (Yantai fast eastern fine chemical CO., LTD) three times, 10 min each. Sections were then rehydrated in graded ethanol (Yantai fast eastern fine chemical CO., LTD), from 100% ethanol through 95%, 90%, 80%, 70% and 60% ethanol, and finally in distilled water. Heat-mediated antigen retrieval was conducted in ethylenediaminetetraacetic acid (EDTA) buffer (pH 9.0) (MVS-0098; MXB) using a microwave pressure cooker for 10 min. Sections were blocked with 5% bovine serum albumin (BSA) for 30 min at 37 °C and then incubated with a mouse anti-NTN4 monoclonal antibody (sc-365280; Santa Cruz Biotechnology) at 1:100 dilution overnight at 4 °C. Sections were washed three times with phosphate-buffered saline (PBS) (pH 7.2), 5 min each. Binding of the anti-NTN4 antibody was detected using a biotin-conjugated secondary anti-mouse antibody from the BOSTER detection system (SA1051) for 30 min at 37 °C, followed by washing three times with PBS (pH 7.2), 5 min each. Next, sections were incubated with SABC-AP (SA1051; BOSTER) for 30 min at 37 °C, followed by washing four times with 0.01 M Tris-buffered saline (TBS) (pH 9.0–9.5), 5 min each. They were then developed with BCIP/NBT as the chromogen for 30 min. The sections were counterstained with nuclear fast red (SA1051; BOSTER) for 5 min. For the determination of NTN4 expression, three pathologists used a double-blind method to randomly select 3–5 high-power fields in order to determine the staining intensity and the positive staining rate. The staining intensity score was defined as follows: 0 (negative), 1 (weak positive), 2 (moderate positive) and 3 (strong positive). The positive rate score was defined as follows: 0 (0%), 1 (1–25%), 2 (26–50%), 3 (51–75%), 4 (76–100%). Cumulative score = staining intensity score × positive rate score. A cumulative score < 4 was considered low NTN4 expression, whereas a score ≥ 4 was considered high NTN4 expression. Statistical analysis The Wilcoxon test was used to compare different expression levels of NTN4 in different cancers. Kaplan–Meier plots were used to estimate survival curves. In order to describe the survival curves more accurately, the log-rank test was used to calculate log-rank P values. A univariate Cox regression model was applied to calculate hazard ratios (HR), 95% confidence intervals (CI) and Cox P values in PrognoScan. Spearman's coefficient was used to analyze the correlation of gene expression. Using the receiver operating characteristic (ROC) curve of NTN4, an optimal cut-off point was calculated to distinguish "high" from "low" expression, and the ROC was generated with MedCalc in R version 4.0.2 ( https://www.r-project.org/ ). In the absence of special circumstances, a P < 0.05 was considered statistically significant.
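Two of the routine calculations described in the methods above can be summarized in a short sketch: the conversion of TCGA level 3 HTSeq-FPKM values to TPM, and the cumulative IHC score (staining intensity score multiplied by positive-rate score, with 4 as the high/low cut-off). The formulas follow the text; the gene values and scores in the example are invented solely for illustration, and the study's own analyses were run in R rather than Python.

```python
import numpy as np

def fpkm_to_tpm(fpkm):
    """Convert FPKM values to TPM: TPM_i = FPKM_i / sum(FPKM) * 1e6.

    Both units are length-normalized, so TPM is simply FPKM rescaled so that
    each sample's values sum to one million.
    """
    fpkm = np.asarray(fpkm, dtype=float)
    return fpkm / fpkm.sum() * 1e6

def ihc_cumulative_score(intensity_score: int, positive_rate_score: int,
                         cutoff: int = 4) -> str:
    """Cumulative IHC score = intensity (0-3) x positive-rate (0-4); >= cutoff is 'high'."""
    score = intensity_score * positive_rate_score
    return "high" if score >= cutoff else "low"

# Toy example: three genes in one sample (values are illustrative, not TCGA data;
# in practice the conversion is applied across all genes of a sample).
fpkm = [12.0, 3.5, 0.5]
print(fpkm_to_tpm(fpkm))            # rescaled values summing to 1e6
print(ihc_cumulative_score(2, 3))   # 6 -> 'high'
print(ihc_cumulative_score(1, 2))   # 2 -> 'low'
```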
The mRNA expression and protein expression of NTN4 To evaluate NTN4 mRNA expression in pan-cancer, RNA sequencing data in TCGA was examined using TIMER. The differential NTN4 mRNA expression patterns between tumorous and adjacent tissues were summarized in Fig. A. NTN4 mRNA expression was significantly lower in invasive breast carcinoma (BRCA), as well as in basal, human epidermal growth factor receptor 2 (Her2 + ), and luminal breast cancer subtypes, compared with adjacent tissues. Meanwhile, expression of NTN4 in tumor was significantly lower than that in adjacent tissue in unpaired (Fig. B) and paired samples (Fig. C). In addition, ROC curve was used to analyze effectiveness of NTN4 mRNA expression level AUC on distinguishing breast cancer tissues from non-tumor tissues.
The AUC of NTN4 was 0.764, suggesting that NTN4 could serve as a biomarker to distinguish BC from non-tumor tissue (Fig. D). NTN4 protein expression was verified by IHC in adjacent tissue and tumor tissue (Fig. E). Demographic and clinical characteristics of patients were summarized in Table , in which 1083 primary breast cancer cases were collected from TCGA database. According to relative NTN4 levels, breast cancer patients were divided into low (n = 541) and high (n = 542) expression groups. The associations between NTN4 expression levels and clinicopathological characteristics were evaluated. Chi-square tests revealed that NTN4 expression was associated with T stage (P < 0.001), Histological type (P < 0.001), Pathologic stage (P = 0.004), progesterone receptor (PR) and estrogen receptor (ER) status (P < 0.001). No significant correlation was observed between NTN4 expression and age (P = 0.035), M stage (P = 1.000), menopausal status (P = 0.916) or HER2 status (P = 0.438). Relationships between NTN4 mRNA expression and clinicopathological characteristics NTN4 mRNA expression level in breast cancer was explored using UALCAN database. Consistently, NTN4 mRNA expression level in BC tumor was significantly higher than in normal tissue (P < 0.001, Fig. A). In different molecular subtypes, luminal had higher mRNA expression of NTN4 than HER2 + and triple negative breast cancer (TNBC) (P < 0.001, Fig. B). Based on clinical stages, normal tissues had higher NTN4 mRNA expression than all stages of BC (Fig. C). In addition, four different stages of lymph node involvement had lower NTN4 mRNA expression than normal tissue (Fig. D). Relationships between NTN4 promoter methylation and clinicopathological characteristics Using UALCAN database, we explored if promoter methylation of NTN4 was related to clinicopathological characteristics of breast cancer patients. NTN4 promoter methylation level was significantly higher in primary tumor than in normal tissue (P < 0.001, Fig. A). Based on molecular subtypes, luminal and TNBC had higher levels of NTN4 promoter methylation (P < 0.001, Fig. B). Based on clinical stages, stage 2 and stage 3 had higher levels of NTN4 promoter methylation than stage 4 (Fig. C). Furthermore, based on lymph node status, N0 and N1 had higher levels of NTN4 promoter methylation than N3 (Fig. D). Thus, NTN4 promoter methylation may contribute to breast cancer development and progression. NTN4 mRNA level predicts prognosis in breast cancer Survival analysis of NTN4 mRNA expression was evaluated using PrognoScan (Supplementary Table ). Among four cohorts (GSE6532-GPL570, GSE1379, GSE3494-GPL97, GSE4922-GPL97) including different stages of breast cancer, high NTN4 expression was associated with favorable prognosis (Table ). Similar trend was observed, in Kaplan–Meier plotter database, based on Affymetrix microarrays. Notably, NTN4 significantly correlates with clinical outcome of breast cancer patients, including overall survival (OS), relapse-free survival (RFS), and distant metastasis-free survival (DMFS) (Fig. A–C, OS: HR (95% CI) 0.68 (0.52–0.89), P = 0.0047; RFS: HR (95% CI) 0.7 (0.67–0.82), P = 3.9e−06; DMFS: HR (95% CI) 0.68 (0.52–0.89), P = 0.0046, respectively). TP53 is a common mutation gene in breast cancer, and currently no drugs targeted TP53 are available. Therefore, we hypothesized that NTN4 modifies the prognosis in TP53 mutant subpopulation. This may be beneficial to improve the treatment strategy for patients with TP53 mutation. 
Interestingly, NTN4 significantly affected the OS of TP53 mutant patients (Fig. D, OS: HR (95% CI) 0.12 (0.01–0.93), P = 0.015). Therefore, it is conceivable that low NTN4 expression might be a risk factor for a poor prognosis in breast cancer patients. Correlation between NTN4 mRNA expression and six types of infiltrating immune cells Immune cells in the TME can affect a patient’s survival. Hence, it would be meaningful to explore the association between immune infiltration and NTN4 mRNA expression. We determined if NTN4 mRNA expression was related to immune infiltration in different cancers by calculating coefficient index of NTN4 mRNA expression with immune infiltration in breast cancer using the TIMER. Six types of infiltrating immune cells (B cells, CD4 + T cells, CD8 + T cells, neutrophils, macrophages, and DC) were explored. NTN4 expression was positively associated with CD8 + T cells (r = 0.117, P = 2.64e−04), macrophages (r = 0.247, P = 3.82e−15) and neutrophils (r = 0.07, P = 3.09e−02) in breast cancer whereas negatively with B cells (r = − 0.064, P = 4.62e−02) and tumor purity (r = − 0.187, P = 2.53e−09), but not DC (r = 0.004, P = 9.04e−01) (Fig. A). In different breast cancer subtypes, associations differed (Fig. B–D). In base-like subtype, NTN4 expression was not related to tumor purity (r = − 0.168, P = 5.71e−02), whereas related to macrophages only (r = 0.201, P = 2.38e−02). In HER2 + breast cancer, NTN4 expression level was not related to tumor purity (r = − 0.097, P = 1.67e−01), whereas only negatively related to CD8 + T cells (r = − 0.352, P = 7.27e−03). In luminal subtype, NTN4 expression level was negatively associated with tumor purity (r = − 0.258, P = 9.80e−10), whereas positively associated with B cells, CD8 + T cells, CD4 + T cells, macrophages, neutrophils, and DC. Correlation of NTN4 mRNA expression with markers of immune cells The potential relationship of NTN4 mRNA expression with infiltrating immune cells was explored using the TIMER. Immune cells characterized by cellular markers were deciphered, including B cells, CD8 + T cells, M1/M2 macrophages, tumor-associated macrophages (TAM), monocytes, natural killer cell (NK), neutrophils, and DC. Different functional T cells such as Tfh, Th1, Th2, Th9, Th17, Th22, Treg, and exhausted T cells were analyzed (Table ). In the TIMER, NTN4 mRNA expression levels were significantly related to 33 out of 45 immune cell markers after adjustment for tumor purity. Gene alterations in NTN4 in breast cancer tissue from cBioPortal Gene alterations in NTN4 were harbored in 1.1% of sequenced cases from OncoPrint schematic of cBioPortal (Fig. A). Among invasive breast carcinoma, no alteration in NTN4 was identified. Among Breast Invasive Ductal Carcinoma, amplification in NTN4 was common. Mutations and deep deletion occurred with an equal frequency. Among Breast Invasive Lobular Carcinoma, amplification and mutation of NTN4 occurred with an equal frequency as well. Amplification of NTN4 occurred in Breast Invasive Mixed Mucinous Carcinoma (Fig. B). All mutations of NTN4 in breast cancer were described in Fig. C. NTN4 harbored one truncating mutation and three missense mutations. Furthermore, relationship of NTN4 gene alteration with breast cancer patient survival was assessed. However, there was no significant relationship of gene alterations in NTN4 with OS, disease specific survival (DSS), disease free survival (DFS) and progress free survival (PFS) of breast cancer patients (Fig. D–G). 
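TIMER reports purity-adjusted correlations such as those above; a rough offline analogue (not the authors' pipeline) is a partial Spearman correlation between NTN4 and an immune-cell marker, conditioning on tumor purity, sketched here with simulated, hypothetical data:

```r
# Partial Spearman correlation adjusting for tumour purity (simulated stand-in data).
library(ppcor)

set.seed(1)
purity <- runif(100, 0.3, 0.9)             # hypothetical tumour purity
ntn4   <- rnorm(100) - purity              # hypothetical NTN4 expression
marker <- rnorm(100) + 0.5 * ntn4          # hypothetical immune-cell marker (e.g. CD8A)

pcor.test(ntn4, marker, purity, method = "spearman")   # partial rho and P value
```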
Exploration of NTN4 molecular functions and regulation pathways based on bioinformatics tools Potential molecular functions and regulation pathways of NTN4 were preliminarily explored to demonstrate the potential mechanism underlying how NTN4 regulates biological behaviors of breast cancer. First, the STRING database was searched for genes that possibly interact with NTN4 (Fig. A). These selected genes were subjected to GO analysis to identify the cellular components (CC) (Fig. B), biological processes (BP) (Fig. C) and molecular functions (MF) (Fig. D) in which NTN4-interacting genes were involved. Based on CC, differentially expressed proteins were extrinsic components of membrane. According to BP, differentially expressed proteins were mainly involved in morphogenesis and motility. Based on MF, differentially expressed proteins functioned mainly in signaling receptor binding. GO and KEGG pathway analysis was performed to identify molecular pathways in which NTN4-interacting genes were involved. The top 12 enriched terms, such as extracellular matrix (ECM)-receptor interaction, adhesion and extracellular part, are presented in Fig. E.
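The enrichment above was performed inside the STRING web interface. A loosely equivalent offline sketch, using clusterProfiler instead of STRING and a hypothetical interactor list, might look like this:

```r
# GO / KEGG over-representation analysis for a hypothetical NTN4 interactor list.
# This only loosely mirrors the STRING web analysis; gene symbols below are illustrative.
library(clusterProfiler)
library(org.Hs.eg.db)

genes  <- c("NTN4", "DCC", "UNC5B", "ITGB1", "ITGA6")    # hypothetical interactors
entrez <- bitr(genes, fromType = "SYMBOL", toType = "ENTREZID", OrgDb = org.Hs.eg.db)

ego <- enrichGO(gene = entrez$ENTREZID, OrgDb = org.Hs.eg.db,
                ont = "BP", pAdjustMethod = "BH", readable = TRUE)
ekk <- enrichKEGG(gene = entrez$ENTREZID, organism = "hsa")

head(as.data.frame(ego))   # enriched GO biological processes
head(as.data.frame(ekk))   # enriched KEGG pathways
```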
Based on data from public databases, NTN4 correlates with breast cancer prognosis and immune infiltration. NTN4 mRNA expression was significantly lower in invasive breast carcinoma compared with adjacent tissues, while increasing NTN4 mRNA levels were related to a favorable prognosis in breast cancer patients. NTN4 has previously been proposed as a prognostic marker. In invasive breast carcinoma, NTN4 expression is associated with longer DFS and OS as an independent prognostic factor affecting survival . In addition, dysregulated NTN4 has been identified as a potential mediator of breast cancer risk . For example, the credible causal variant (CCV) rs61938093 is located in an enhancer that interacts with the NTN4 promoter. This risk allele might correlate with reduced activity of the NTN4 promoter.
Knockout of NTN4 in breast epithelium increased cell proliferation in vitro and tumor growth in vivo, suggesting that low expression of NTN4 promoted breast cancer development. In addition, NTN4 is associated with breast cancer cell migration and invasion via regulation of epithelial mesenchymal transition (EMT)-related genes . Another important aspect of this study is that NTN4 correlates with diverse immune infiltration (Fig. ). The NTN4 mRNA levels may reflect infiltration of lymphocytes in breast cancer. In the era of precision medicine , immunological biomarkers are critical for patient subpopulation selection. Immune biomarkers are numerous , and immune checkpoint inhibitors (ICIs) and tumor mutation burden (TMB) hold promise as such biomarkers , . In addition, the frequency of NTN4 gene alteration was low (1.1%), with patterns stratified by molecular subtypes of breast cancer. The NTN4 gene alterations include missense mutation and truncating mutation. However, genetic variation may not affect a patient's survival. Finally, in NTN4 interaction gene cluster analysis, signaling pathways were mostly enriched in cell morphogenesis and motility, which may explain the potential involvement of NTN4 in tumor development and progression of breast cancer. In this study, for the first time, the relationship of NTN4 with the TME in breast cancer was indicated by bioinformatics. In addition, this study confirmed that NTN4 may be used as a prognostic marker of breast cancer by analyzing a series of clinical samples from patients with in breast cancer. As our findings were obtained from public databases, there are some limitations. With the update of databases, the relationship of NTN4 with prognosis can change accordingly. Similarly, the relationship of NTN4 mRNA level with different immune cell types and markers based on sequencing data from public databases can also change. On the other hand, with accumulation of resources, data stratification will become more refined so that reliability of results may increase. However, further experimental verification is required to validate our findings. The current research has explored NTN4 and its prognostic significance in breast cancer. In general, NTN4 is downregulated in breast cancer tissues. Besides, NTN4 is associated with immune infiltration and survival in breast cancer. Collectively, these data suggest that NTN4 is worthy of further investigation in breast cancer, and it may be a potential biomarker, which can be used to predict the prognosis of breast cancer patients. Supplementary Table S1.
Perfusion Capacity as a Predictive Index for Assessing Visual Functional Recovery in Patients With Idiopathic Epiretinal Membrane
143d8cb0-498b-4179-8f6d-1a77032c8a5d
11756611
Surgical Procedures, Operative[mh]
Idiopathic epiretinal membrane (iERM) is a condition characterized by fibrocellular tissue formation at the vitreoretinal interface, which induces retinal architecture deformation, including thickening and foldingd. , Age is a contributing factor to the 2.2% to 28.9% incidence of iERM. The structural alteration, especially in the macula, leads to visual dysfunction, with metamorphopsia and blurred central vision being common symptoms. , Despite its significant impact on visual quality, iERM often presents with a relatively good best-corrected visual acuity (BCVA) in the early stages, masking underlying retinal dysfunction. This makes BCVA an imperfect measure for visual recovery after surgery, particularly in cases with subtle macular changes. Pars plana vitrectomy, an effective method for the treatment of iERM, aims to remove the epiretinal membrane and restore retinal architecture. Although BCVA recovery after surgery is frequently reported, it does not fully capture the extent of visual impairment, particularly when subtle functional deficits remain. Notably, many patients with iERM retain relatively good BCVA despite experiencing blurred vision, metamorphopsia, and difficulty distinguishing objects in low light or low-contrast environments. Microperimetry, which measures retinal sensitivity, although less frequently used in iERM, offers a more sensitive method for detecting early functional changes and has shown improved reliability over traditional visual acuity tests, making it an ideal tool for evaluating macular function in these patients. With the advent of optical coherence tomography angiography (OCTA), a more detailed, depth-resolved imaging modality, there has been growing interest in quantifying retinal blood flow and microvascular changes in diseases like iERM. , OCTA allows differentiation between the superficial vascular complex (SVC) and deep vascular complex (DVC), , and several indexes such as foveal avascular zone parameters, vascular density (VD), fractal dimension, and vessel tortuosity have been explored. However, these measures are limited in their ability to fully capture the dynamic changes in retinal perfusion, particularly in conditions where there is significant structural deformation or vascular remodeling, as seen in iERM. To address this gap, we introduce a new metric, vascular perfusion capacity (PC), which combines both vascular density and perfused area, providing a more comprehensive and dynamic assessment of retinal perfusion. Unlike VD, which may be influenced by macular edema or vascular congestion, PC reflects the actual blood flow capacity, accounting for both vessel dilation and microvascular impairment. Therefore this study aimed to evaluate both preoperative and postoperative visual acuity in patients with iERM, alongside the assessment of retinal vascular parameters using OCTA, which has not been extensively applied in iERM. – Furthermore, we analyzed the correlation between preoperative retinal vascular metrics and postoperative visual outcomes to elucidate the impact of microvascular alterations associated with iERM on visual function. Participants This retrospective study involved 30 eyes of 30 patients with iERM and 28 eyes of 28 healthy volunteers at the Zhongshan Ophthalmic Center from March 2023 to February 2024.The study was approved by the Zhongshan Ophthalmic Center Ethics Committee and adhered to the principles of the Declaration of Helsinki. 
A senior surgeon (LL) performed all procedures, including 25-gauge pars plana vitrectomy, iERM peeling, and non-foveal-sparing internal limiting membrane peeling. All surgeries were completed without intraoperative complications. Phacoemulsification and intraocular lens implantation were performed on patients over 55 years of age with mild cataracts. The inclusion criteria were as follows: (1) iERM diagnosed by retinal specialists using fundus examination, and OCT; (2)the axial length of all included eyes ranged from 22.00 mm to 25.00 mm; and (3) a minimum follow-up period of three months. Exclusion criteria included secondary ERM associated with other retinal diseases, severe cataract or glaucoma, high myopia (refractive error ≤ −6.00 diopters or axial length > 26.0 mm), retinal vascular diseases, prior vitreoretinal surgery, uveitis, or uncontrolled systemic disease. All participants underwent comprehensive pre- and postoperative evaluations, including BCVA, retinal sensitivity, and OCTA imaging. Sample Size Estimation To confirm the adequacy of our sample size, we conducted a paired-sample power analysis on pre- and postoperative retinal sensitivity and the PC in the SVC within 3 × 3 mm 2 region. Using PASS 2021 software (NCSS, LLC, Kaysville, UT, USA), , designed for power and sample size calculations, we set a significance level of (α = 0.05)and a power of 0.9.For retinal sensitivity, a sample size of 11 pairs achieves 91.6% power to reject the null hypothesis using a two-sided paired t -test. For PC, a sample size of 12 pairs achieves 91.6% power. Consequently, we ultimately enrolled approximately 30 patients in the study to ensure an adequate sample size. Functional Assessment BCVA was measured using a Snellen chart, with values converted to the logMAR for statistical analysis. Retinal sensitivity was assessed with the Nidek MP-3 microperimeter (Nidek Co., Tokyo, Japan), using a 200-ms Goldmann III (25.7-minarc) stimulus. The test used a 4-2-1 automatic staircase strategy, ranging from 0 to 20 dB in 2 dB increments. The fixation target was a 1-mm red ring on a white monochrome background set at 31.4 abs. Retinal sensitivity was calculated as the mean sensitivity across 33 points (covering a 3 × 3-mm 2 area) within the central 10° of the fovea, with a dynamic range of 34 dB. Central Retinal Thickness (RT) Assessment and iERM Stage Mean retinal thickness in the macular area (within a 3-mm diameter) and the paramacular region (within a 6-mm diameter) was measured using the swept-source OCT/OCTA system. To evaluate preoperative and postoperative changes, full central retinal thickness was assessed. The iERM stage was classified according to the staging system proposed by Govetto et al. OCTA Imaging OCTA scans were acquired using a swept-source OCT/OCTA system (VG200; S Vision Imaging, Luoyang, China). This instrument used a central wavelength of 1050 nm (990–1100 nm full width) and an A- scan rate of 200,000/s. The OCTA vascular metrics in this study were derived from the en face OCTA images, rather than OCTA B-scans. OCTA provides direct image, VD and perfusion area (PA) of the retina. VD and PA of the retina, including SVC and DVC, were automatically calculated in the inner 3 mm and 6 mm circles of the Early Treatment of Diabetic Retinopathy Study chart. 
The SVC layer extended from 5 µm above the internal limiting membrane to the upper third of the retinal ganglion cell complex, whereas the DVC layer spanned from the upper third of the retinal ganglion cell complex to 25 µm below the outer plexiform layer. After automated segmentation, a retinal specialist verified the accuracy of segmentation for all images. VD was calculated as the percentage of blood vessel area in the scanned region, whereas PA represented the perfused area within the region. Image Analysis PC was defined as the ratio of the perfusion area to the total area, adjusted for vascular density, providing a continuous measure of perfusion efficiency ( A). An abstract illustration of blood vessels is provided in B. Statistical Analysis All statistical analyses were conducted using the Statistical Package for the Social Sciences (SPSS) version 25.0 (SPSS Inc., Chicago, IL, USA). Continuous variables are presented as mean ± standard deviation (SD). Variables with a non-normal distribution were to be compared with the Mann-Whitney U test and the Wilcoxon signed-rank test; in this study, all data followed a normal distribution. Group differences were assessed using independent-sample t-tests, and paired t-tests were used to compare preoperative and postoperative retinal sensitivity, BCVA (logMAR), and anatomical parameters. Pearson correlation analysis (or Spearman's rank correlation coefficient, as appropriate) was performed to examine the relationships between preoperative anatomical parameters and changes in visual parameters. Multiple linear regression was used to analyze the influence of various factors on postoperative visual function. Statistical significance was set at P < 0.05.
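The sample-size calculation reported in the Methods was done in PASS 2021; as a rough cross-check only (not the authors' software), the same paired t-test power calculation can be sketched in base R, here assuming the observed retinal-sensitivity change (mean 2.85 dB, SD 2.54 dB) as the effect size:

```r
# Paired t-test sample-size and power check in base R (PASS 2021 was used in the study).
# Assumed effect: mean change 2.85 dB with SD of differences 2.54 dB (reported values).
power.t.test(delta = 2.85, sd = 2.54, sig.level = 0.05, power = 0.9,
             type = "paired", alternative = "two.sided")      # required number of pairs

power.t.test(n = 11, delta = 2.85, sd = 2.54, sig.level = 0.05,
             type = "paired", alternative = "two.sided")$power  # close to the 91.6% reported
```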
Baseline Information The baseline demographic and clinical characteristics of the iERM and control groups are summarized in .
The iERM group included 30 eyes (13 males, 17 females; mean age, 62.30 ± 9.11 years; axial length, 23.67 ± 0.90 mm) with iERM stages distributed as follows: stage 2, 9 eyes; stage 3, 13 eyes; and stage 4, 8 eyes. The control group included 28 healthy eyes (12 males, 16 females; mean age, 61.14 ± 4.63 years; axial length, 23.72 ± 0.73 mm). No significant differences were found between the two groups in terms of age ( P = 0.549), gender ( P = 1.000) or axial length ( P = 0.162). Preoperative Comparisons At baseline, iERM eyes exhibited significantly lower retinal sensitivity and BCVA compared to healthy controls, as expected ( P < 0.001 for both measures, ). Preoperative retinal thickness was significantly greater in both the 3 × 3 mm 2 and 6 × 6 mm 2 regions in the iERM group (both P < 0.001), consistent with the macular swelling associated with the condition. Interestingly, while VD and PA in the SVC were significantly elevated in the iERM group (all P < 0.001), PC in the SVC was significantly lower in the 3 × 3 mm 2 region ( P < 0.001). No significant differences were detected in the DVC layer ( P > 0.05). Postoperative Changes Postoperative analysis revealed significant improvements in both BCVA and retinal sensitivity ( P < 0.001 for both, ). The average improvement in BCVA was −0.19 ± 0.21 logMAR, whereas retinal sensitivity increased by 2.85 ± 2.54 dB. Retinal thickness in both the 3 × 3 mm 2 and 6 × 6 mm 2 regions showed significant reductions postoperatively ( P < 0.001 for both regions). After the removal of the iERM, the restoration of retinal perfusion was observed in the SVC layer, where postoperative VD and PA were significantly lower compared to baseline (both P < 0.001), whereas PC in the SVC significantly increased in both the 3 × 3 mm 2 and 6 × 6 mm 2 regions after surgery (both P < 0.001). No significant changes were observed in the DVC layer ( P > 0.05). Correlations Between Preoperative Anatomical Parameters and Visual Function Improvement Baseline factors associated with changes in retinal sensitivity included 3 × 3 mm 2 RT ( r = −0.389, P = 0.034, ) and PC in both the 3 × 3 mm 2 and 6 × 6 mm 2 SVC regions (3 × 3 mm 2 SVC PC: r = 0.71, P < 0.001; 6 × 6 mm 2 SVC PC: r = 0.53, P = 0.003; ). No significant correlations were found between baseline VD or PA and postoperative improvements in retinal sensitivity. Multiple Linear Regression Analysis In multiple linear regression analysis, preoperative PC in the SVC within the 3 × 3 mm 2 region was the strongest predictor of postoperative retinal sensitivity ( r = 0.71, P < 0.001). The regression model showed that postoperative retinal sensitivity was negatively associated with age ( β = −0.53, P = 0.001) and 3 × 3 mm 2 RT ( β = −0.39, P = 0.013), while positively associated with preoperative retinal sensitivity ( β = 1.10, P < 0.001) and PC in the SVC within the 3 × 3 mm 2 region ( β = 0.49, P = 0.023; ). Correlations Between Postoperative Anatomical Parameters and Postoperative Visual Outcomes Postoperative BCVA (logMAR) was significantly correlated with postoperative PC in the 6 × 6 mm 2 SVC region ( r = −0.42, P = 0.021; ).
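The regression above was fitted in SPSS; a minimal R sketch of a model of this form, with hypothetical variable names and simulated data standing in for the real measurements, is:

```r
# Multiple linear regression of postoperative retinal sensitivity on the predictors
# reported above. Variable names and simulated values are hypothetical stand-ins.
set.seed(7)
n  <- 30
df <- data.frame(age               = rnorm(n, 62, 9),
                 rt_3mm            = rnorm(n, 400, 50),   # central retinal thickness (µm)
                 preop_sensitivity = rnorm(n, 22, 3),     # dB
                 svc_pc_3mm        = runif(n, 0.6, 0.9))  # perfusion capacity, SVC 3 x 3 mm
df$postop_sensitivity <- with(df, 10 - 0.05 * age - 0.01 * rt_3mm +
                                0.9 * preop_sensitivity + 5 * svc_pc_3mm + rnorm(n, 0, 1))

fit <- lm(postop_sensitivity ~ age + rt_3mm + preop_sensitivity + svc_pc_3mm, data = df)
summary(fit)    # coefficients (beta), P values, adjusted R-squared
confint(fit)    # 95% confidence intervals for each predictor
```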
This study demonstrates that PC serves as a valuable and innovative evaluation index for assessing retinal microcirculation in patients with iERM, as well as for predicting postoperative recovery of visual acuity and retinal sensitivity. In contrast to conventional assessment metrics such as VD and PA, PC provides a more nuanced and dynamic evaluation of iERM. Those with higher preoperative PC of the SVC tended to experience greater improvements in retinal sensitivity and achieved higher postoperative sensitivity levels. We therefore suspect that when the iERM causes more severe vascular distortion and deformation in the SVC layer, the PC value of the SVC decreases, and the increased vascular resistance in the SVC layer leads to greater congestion and dilation in the DVC layer, which raises the PC value of the DVC.
The more the vascular condition deviates from normal, the greater the damage to the retina; only when retinal vascular PC is within the normal range does the retinal vasculature maintain a stable state. We suspect that enhancing retinal perfusion preoperatively might improve surgical outcomes, which opens avenues for adjunct therapies administered prior to surgery to optimize the microvascular environment. Traditional OCTA metrics like VD and PA have limitations in accurately reflecting the functional status of the retinal vasculature, especially in conditions like iERM where structural distortion is significant. VD can be confounded by factors such as macular edema, vessel compression, and tortuosity, which do not necessarily correlate with perfusion efficiency. , Previous research has presented varying results regarding the relationships between retinal vascular parameters and visual function in iERM. Bacherini et al. found that lower VD in both the SVC and the DVC was associated with poorer visual acuity, whereas Yuce et al. showed that higher VD in the DVC correlated with poorer visual acuity. However, another study and our study showed that neither VD nor PA correlated significantly with visual function outcomes, aligning with prior research that highlighted the limitations of these parameters. Moreover, previous findings on changes in VD of the SVC and DVC in iERM eyes are inconsistent, with some studies indicating an increase, a decrease, or no significant change, adding to the confusion. , , , Our findings suggest that PC, by integrating both VD and PA into a single metric that reflects the actual perfusion capacity of retinal vessels, provides a more accurate representation of microvascular function. Recent studies have highlighted the importance of considering vessel caliber and flow dynamics rather than vessel density alone. For instance, research by Tang et al. emphasized the role of the vessel diameter index, which reflects average vessel caliber rather than mere structural presence, in diabetic retinopathy. Similarly, Nishigori et al. identified a new OCTA metric, variable interscan time analysis, which can detect macular perfusion changes and may be associated with predicting the recurrence of macular edema in retinal vein occlusion. These studies suggest that it is essential to investigate vessel perfusion itself, supporting our approach of using PC as a more comprehensive metric. Our study further highlights the differential impact of iERM on the SVC and DVC. The SVC, being closer to the vitreoretinal interface, is more directly affected by the mechanical traction of the iERM. This leads to significant alterations in PC within the SVC, which correlate with changes in retinal sensitivity. In contrast, the DVC appears to be less affected structurally but may experience functional changes due to altered perfusion dynamics. Recognizing these layer-specific effects not only enhances our understanding of iERM pathophysiology but also emphasizes the need for targeted OCTA metrics, like PC, that can differentiate between the impacts on the superficial and deep retinal vasculature. The ability to predict postoperative visual outcomes using preoperative PC has significant clinical implications. Surgeons can use PC measurements to identify patients who are more likely to benefit from surgical intervention, thereby optimizing patient selection and managing expectations.
Additionally, monitoring PC could help in assessing the efficacy of novel therapeutic approaches, such as pharmacological agents aimed at improving microvascular perfusion. Limitations Although our study provides valuable insights, it is not without limitations. The follow-up period was limited to three months; future studies with larger cohorts and longer follow-up periods are necessary to confirm the long-term prognostic value of PC. Moreover, incorporating other functional assessments, such as contrast sensitivity, metamorphopsia, and patient-reported outcome measures, could provide a more comprehensive understanding of visual function recovery. Future studies should also investigate the relationship between iERM stage and perfusion capacity, and how this may affect visual recovery. In conclusion, PC emerges as a novel and robust metric for evaluating retinal microcirculation in idiopathic epiretinal membrane. Our study demonstrates that preoperative PC in the superficial vascular complex is a significant predictor of postoperative retinal sensitivity improvement. This finding has important clinical implications, suggesting that PC can be used to better select candidates for surgery and to tailor personalized treatment plans. Supplement 1
Risk factors for blood transfusion in patients undergoing hysterectomy for stage I endometrial cancer
0a65ae5e-93c5-48bd-8c9e-e0a134597681
11832620
Surgical Procedures, Operative[mh]
Endometrial cancer (EC) is the most common gynecologic malignancy among women in the United States . A meta-analysis found that 26.5% of endometrial cancer patients have anemia even before their surgical treatment . Anemia of cancer (AOC) is considered an independent risk factor for perioperative complications among cancer patients. Perioperative blood transfusion among patients with gynecologic malignancies ranges from 3–77% . A study by Swift et al. on 61,531 gynecologic oncology cases found that ovarian cancer had the highest rate of transfusion at 62%, followed by EC at 33% and cervical cancer at 4% . Moreover, blood transfusion has been shown to be associated with poorer surgical and oncologic outcomes. For example, blood transfusion was found to be associated with increased rates of infection and metastasis among colorectal cancer patients . Among EC patients, those who underwent transfusion had worse 5-year survival compared with non-transfused patients in a study of 263 endometrial cancer patients above the age of 60 years. More advanced FIGO stages were associated with a higher likelihood of blood transfusion compared with early-stage EC . Several studies showed that blood transfusion is an independent risk factor for perioperative morbidity among gynecologic oncology patients, including ovarian, uterine, and cervical cancer patients . Our analysis focuses on stage I EC, as it accounts for the majority of EC cases and is mostly treated surgically, making it a large group in which to investigate transfusion-related outcomes. Surgical management via simple hysterectomy, either through open surgery or a minimally invasive approach, is the mainstay treatment for this subset of patients. Our study highlights the risk factors, including demographics, perioperative characteristics, and medical conditions, contributing to blood transfusion in this cohort. The National Surgical Quality Improvement Program (NSQIP) is a comprehensive multi-institutional database that includes blood transfusion events up to 72 h postoperatively as well as perioperative information on surgical patients. The goal of this study was to identify risk factors for transfusion following surgery among stage I EC patients. Ethical considerations Institutional review board approval was not required. Our study is a retrospective cohort study that used the American College of Surgeons' ACS NSQIP database. Participating sites follow strict guidelines to ensure data quality and anonymity of patient information. The NSQIP 2016–2022 participant files were used to identify patients above 18 years old with stage I EC. We identified malignancy patients using the variable "gynecologic cancer case" (yes or no), and EC patients were identified via the "Uterine corpus stage" variable. Demographic data, medical comorbidities, and perioperative data collected included the following: age, body mass index (BMI), tobacco use, American Society of Anesthesiologists (ASA) classification, race, ethnicity, functional status, bleeding disorders, congestive heart failure (CHF), hypertension (HTN), diabetes mellitus (DM), dialysis-dependent renal failure, prior abdominal surgeries, prior pelvic surgeries, operative approach, operative time, preoperative hematocrit (HCT), and preoperative platelet count. Age was dichotomized into two groups with 60 years as the cutoff point, reflecting the risk of EC.
Functional status was classified as "dependent" if the health status was recorded as partially or totally dependent, and otherwise as "independent". Our primary outcome was a blood transfusion event, defined as transfusion of packed red blood cells (RBCs) within the first 72 h postoperatively. Data were imported and analyzed using RStudio (R version 4.3.0; R Core Team (2023); Vienna, Austria). We performed descriptive statistics to describe the overall cohort by blood transfusion outcome. We then performed univariate analysis and multivariable logistic regression to examine the association between clinical variables of interest and our outcome of blood transfusion. For categorical variables we used the chi-square test, and for continuous variables the independent t-test. Independent variables that were clinically significant or had a significance level of P ≤ 0.1 in the univariate analysis were included in the multivariable logistic regression model. Using the "glm" function, we constructed the multivariable logistic model to predict the occurrence of a blood transfusion (dependent variable), with P values < 0.05 deemed statistically significant. Between 2016 and 2022 we identified 27,183 patients with Stage I EC who underwent surgical management.
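As context for the results that follow, the modelling step described above can be sketched in R; the variable names and simulated data below are hypothetical stand-ins for NSQIP fields, not the study data or the authors' code:

```r
# Minimal sketch of a multivariable logistic model of the form described above.
set.seed(42)
n   <- 1000
dat <- data.frame(
  preop_hct_lt30 = rbinom(n, 1, 0.05),   # preoperative hematocrit < 30%
  op_time_ge180  = rbinom(n, 1, 0.25),   # operative time >= 180 min
  open_approach  = rbinom(n, 1, 0.15)    # abdominal vs minimally invasive approach
)
logit <- -4 + 3 * dat$preop_hct_lt30 + 1.2 * dat$op_time_ge180 + 1.8 * dat$open_approach
dat$transfusion <- rbinom(n, 1, plogis(logit))   # simulated 0/1 outcome

fit <- glm(transfusion ~ preop_hct_lt30 + op_time_ge180 + open_approach,
           data = dat, family = binomial)
summary(fit)                                   # coefficients and P values
exp(cbind(aOR = coef(fit), confint(fit)))      # adjusted odds ratios with 95% CIs
```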
Between 2016 and 2022, we identified 27,183 patients with Stage I EC who underwent surgical management. Six hundred sixty-eight (2.5%) received a blood transfusion. As shown in Table , transfused EC patients, in comparison to the non-transfused, were slightly younger (61 years vs. 63 years; p = 0.006), had a higher proportion of non-white race (44% vs. 31%; p < 0.001) and a higher proportion of Hispanic ethnicity (8.1% vs. 6%; p = 0.001). HTN, DM, and endometriosis were the top medical comorbidities in the whole cohort, at 55.3%, 23.4% and 5.6% respectively, with a higher proportion of endometriosis among the transfused group (8.2% vs. 5.6%; p = 0.003). Preoperative history showed that 60% of our cohort were ASA class 3–5, 30.1% had prior abdominal surgery, 47.5% had prior pelvic surgery, and 1.1% were functionally dependent, with a higher proportion among EC patients who received transfusion (3.5% vs. 1.0%; p < 0.001). As for the surgical characteristics, a higher percentage of the transfused group (53%) had a reported operative time of 180 min or more compared with the non-transfused group (25%, P < 0.001), and transfused patients were more often found to have larger uteri (≥ 250 g) (42% vs. 12%; p < 0.001) . In addition, the abdominal approach was five times more frequent among the transfused group (55% vs. 11%; p < 0.001). We also found that a higher percentage of patients with blood transfusion had a low preoperative HCT level (< 30%) (37% vs. 1.8%; p < 0.001). In the univariate analysis (Table ), demographic factors and medical comorbidities such as bleeding disorders, congestive heart failure, insulin-dependent DM, endometriosis, renal dialysis, preoperative functional status, and low preoperative HCT level were significantly associated with requiring blood transfusion. In addition, abdominal hysterectomy as a surgical approach compared with laparoscopy, larger uterine size, and operative time of more than 180 min were significantly associated with an increased likelihood of requiring blood transfusion and were included in the multivariate analysis. After adjusting first for demographics, then for medical and surgical characteristics, there was a significant association between blood transfusion and the following clinical variables (Table ): history of bleeding disorders (aOR 2.01, 95% CI [1.23, 3.20]; p = 0.004), congestive heart failure (aOR 2.30, 95% CI [1.20, 4.15]; p = 0.008), dependent functional status (aOR 2.59, 95% CI [1.41, 4.47]; p = 0.001), and low preoperative HCT % (aOR 22.3, 95% CI [17.7, 28.2]; p < 0.001). With regard to surgical characteristics and operative time, 180 min or more of operative time (aOR 3.38, 95% CI [2.77, 4.14]; p < 0.001), larger uteri of 250–500 g (aOR 1.93, 95% CI [1.48, 2.49]; p < 0.001) and ≥ 500 g (aOR 2.35, 95% CI [1.77, 3.12]; p < 0.001), and the abdominal approach compared with laparoscopic (aOR 6.36, 95% CI [4.95, 8.18]; p < 0.001) were associated with an increased likelihood of blood transfusion in our cohort (Figs. and ). In a large cohort of stage I EC patients in the NSQIP database, we found that the overall incidence of blood transfusion was 2.5% in EC patients undergoing surgical management. This rate is comparable to Backes et al.'s 3% transfusion rate in 503 EC patients undergoing robotic surgical staging but lower than the 7.5% observed by Uccella et al. in 358 EC patients . Additionally, perioperative transfusion rates across gynecological malignancies vary widely, ranging from 3 to 77%, with the upper range reflecting rates observed in ovarian cancer patients undergoing extensive cytoreductive surgeries .
Patients who had a blood transfusion were found to have a medical history significant for bleeding disorder, CHF, and preoperative HCT < 30%. In addition, these patients were more likely to undergo an abdominal approach, have a longer operative time (> 180 min), and have larger uteri (> 250 g). In a different study, optimization of preoperative medical conditions such as preoperative anemia, via iron supplementation leading up to the surgery, lowered the risk of blood transfusion among EC patients . One systematic review found that about 25 to 75% of patients undergoing major oncologic surgery have preoperative anemia . Preoperative anemia contributes to the increased likelihood of blood transfusion among cancer patients. In our cohort, preoperative hematocrit < 30% was significantly associated with blood transfusion. Furthermore, blood transfusion has been shown to be associated with increased short- and long-term morbidity and mortality. For example, a study by Prescot et al. found increased postoperative complications following blood transfusion, such as pneumonia, sepsis, and mortality, among 8906 gynecologic cancer patients . Similar results were obtained by Halabi et al. among colorectal cancer surgery patients . A study by Anic et al. on 152 EC patients showed that blood transfusion was independently associated with decreased 5-year progression-free survival (PFS) and overall survival (OS) . This phenomenon can be explained by various mechanisms, including oxidative stress, inflammation, and immunomodulatory effects. The presence of residual leucocytes, cytokines, and soluble mediators, and changes in blood during storage, can contribute to transfusion-related immunomodulation that suppresses the response to cancer treatments . Similarly, transfusion can trigger oxidative stress that fuels cancer growth and accordingly reduces PFS and OS. This effect can be diminished by ensuring anaerobic storage conditions, scavenging toxic compounds, and improving the antioxidant protection of the stored blood . Finally, we hypothesize that transfusion can trigger the activation of Toll-like receptors (TLRs) and the innate immune system. Earlier studies showed that TLR activation can trigger chemoresistance and enrichment of cancer stem cells . In addition, blood transfusion has inherent potential complications such as allergic reactions, fever, hemolytic reactions, and bloodborne infections . Given the increased likelihood of blood transfusion and its associated perioperative morbidity and mortality, exploring risk factors for blood transfusion and blood transfusion protocols has been an area of interest for cancer surgeons. For instance, a study by Ackroyd et al. in ovarian cancer patients showed that a model including age, preoperative HCT, platelet count, abdominal approach, the presence of ascites and/or disseminated cancer, and potential advanced surgical procedures (colectomy or exenteration), with a score of ≤ 6, was associated with a less than 17% risk of blood transfusion . Similar findings citing age, preoperative hemoglobin, and laparotomy were observed among patients undergoing benign gynecologic surgeries . Another study by Swift et al. on patients with ovarian, uterine, and cervical cancers found that undergoing a laparotomy approach was associated with an increased risk of blood transfusion compared with minimally invasive approaches .
Similarly, our findings suggest that patients who underwent a total abdominal hysterectomy, compared with laparoscopic approaches, were five times more likely to receive a blood transfusion. Several studies have compared liberal transfusion policies to restrictive policies among gynecologic and non-gynecologic patients. A liberal transfusion policy is defined as blood transfusion at Hb < 10 g/dL with Hb maintained at 10 to 12 g/dL, compared with transfusion at Hb < 7 g/dL and maintenance at 7 to 9 g/dL in the restrictive policy . The findings of the Transfusion Requirements in Critical Care trial by Hébert et al. in 1999 were practice changing after showing that restrictive blood transfusion policies were as effective as liberal transfusion protocols . In a study by Boone et al. on 582 gynecologic oncology patients receiving 2276 transfusions under a restrictive blood transfusion policy, protocol-compliant and noncompliant patients showed similar rates of postoperative infections, thrombotic events, and 30-day mortality events . Conflicting results showed that a liberal policy was associated with fewer postoperative composite endpoints in surgical oncology patients, whereas Boone et al. found that restrictive policies in the gynecologic oncology population did not worsen morbidity or mortality rates . Larger uteri often necessitate more extensive dissection and prolonged operative time, both of which could exacerbate blood loss . The association between larger uterine weights and increased transfusion risk may partially stem from benign gynecological conditions such as uterine fibroids, which contribute to uterine enlargement and vascularity, thereby elevating the potential for intraoperative blood loss . Similarly, endometriosis, another common benign condition, can increase surgical complexity through extensive adhesive disease. The need for more adhesiolysis in these cases often results in prolonged operative times and increased tissue trauma, both of which significantly contribute to higher transfusion rates . These findings highlight the importance of preoperative planning and individualized surgical approaches in patients with such conditions to mitigate these risks. However, a limitation of our study is the lack of detailed information on the presence or severity of these conditions in the database. Future research should seek to explore these associations further by integrating detailed clinical data on prevalent benign gynecological conditions, such as uterine fibroids and endometriosis, to better understand their contributions to surgical complexity, intraoperative blood loss, and transfusion risk. Our study acknowledges several limitations. Primarily, it is constrained by the inherent characteristics of the extensive national population-based database utilized. Furthermore, there is minimal to no capacity to ascertain the clinical context surrounding the administration of blood transfusions in the treated individuals. With the involvement of over 90 institutions, there is potential for variability in the blood transfusion protocols employed across different entities. The voluntary nature of participation in the database might introduce selection bias, and the patient demographic within the database could deviate from the broader population; however, the potential impact of this is likely diminished due to the substantial number of participating institutions and patient volume.
While NSQIP data is rigorously collected by trained surgical clinical reviewers directly from medical records, it is not impervious to abstraction errors which could impose limitations on the study outcomes. In conclusion, this study provides a comprehensive analysis of the risk factors associated with blood transfusion in patients undergoing surgery for Stage I endometrial cancer (EC), the most frequently diagnosed stage of this disease. Over the study period from 2016 to 2022, we identified a relatively low transfusion rate of 2.5%. However, key factors associated with an elevated transfusion risk included prolonged operative time, increased uterine weight, low preoperative hematocrit levels, a history of abdominal surgeries, bleeding disorders, and congestive heart failure. These findings underscore the complexity of perioperative management in this patient population. The strong association between low preoperative hematocrit levels and transfusion risk highlights the critical need for future research on preoperative optimization strategies. Prospective studies exploring interventions such as iron supplementation, erythropoietin therapy, and nutritional optimization could provide valuable insights into reducing transfusion rates. Additionally, the role of advanced surgical techniques, such as robotic-assisted hysterectomy, warrants further investigation. Robotic surgery, with its precision and minimally invasive approach, holds promise for reducing blood loss and transfusion requirements, particularly in patients with larger uteri or surgically complex cases. Interventional trials examining blood conservation strategies, such as intraoperative cell salvage and restrictive transfusion protocols, could also refine perioperative care and improve outcomes. Our findings further indicate that patients requiring transfusions often experience longer hospital stays, underscoring the broader implications of transfusion-related risks on healthcare resource utilization and patient recovery. By emphasizing the importance of preoperative optimization and individualized surgical planning, this study provides actionable insights to enhance patient safety, optimize resource allocation, and reduce healthcare costs. As the prevalence of EC continues to rise, ongoing research into transfusion risk factors and mitigation strategies will be essential. A deeper understanding of these associations will enable clinicians to better predict and manage transfusion needs, ultimately improving perioperative outcomes and quality of care for patients with Stage I EC.
Novel Microbial Diagnostic Methods for Clinical, Environmental, and Food Samples
9b0c90bc-654c-443d-8b91-3ad1d6dd64ae
5554990
Pathology[mh]
Accuracy of maxillary molar distalization with clear aligners in three-dimension: a retrospective study based on CBCT superimposition
7ea0b3f5-f24f-4f36-93e6-5d8d9dde8208
11836081
Dentistry[mh]
Clear aligner treatment (CAT) represents an alternative for patients due to its improved aesthetic value and comfort . Maxillary molar distalization is an effective non-extraction method to correct mild-to-moderate crowding or minor skeletal discrepancies, gaining 2–3 mm of arch space to achieve a Class I relationship . CAT has become a new option for this treatment. The efficacy of molar distalization using clear aligners (CAs) has been highly commended . Simon et al. reported that CAs can provide high predictability (88%) of the distalization movement of upper molars . With the help of auxiliary devices such as temporary anchorage devices (TADs) and class II elastics to strengthen anchorage management, more than 2.25 mm of bodily movement of the maxillary first molar could be achieved without remarkable tipping or change in facial height , and this approach can even be an alternative to extraction therapy. However, according to clinical observation, the actual distalization achieved with CAs is often not as good as expected, and side effects frequently occur. The molar exhibits distal tipping during distalization rather than moving bodily, and greater-than-expected posterior buccal rotation, increased anterior facial height and clockwise mandibular rotation have been reported . Previous studies of molar distalization commonly used digital models or lateral cephalometric radiographs taken before and after orthodontic treatment . Since distortion and overlapping of surrounding structures are unavoidable, 2-dimensional (2D) radiographs can hardly provide precise images of the bone around the root apex, making the results inaccurate . As for digital intraoral scanning models, they cannot reflect the actual moving direction and displacement of the roots, making the movement patterns unclear. Besides, the deviation between the impression and the real structure cannot be neglected either . Cone beam computed tomography (CBCT) is regarded as an imaging examination with high specificity, accuracy and reliability . Based on these considerations, in this study we superimposed the pre-treatment and post-treatment CBCT to identify the actual crown and root displacement after CAT. Understanding the accuracy of tooth movement is essential in choosing treatment plans and evaluating treatment effects. Although there have been some studies on molar distalization , little is known about the true direction and amount of crown and root displacement of all teeth in maxillary distalization using CAs. Therefore, this retrospective study aimed to measure the actual moving direction and displacement of the roots and crowns of the upper anterior and posterior teeth by superimposition of pre- and post-treatment CBCT in maxillary distalization patients treated with CAs, providing more detail and evidence for CAT. Participants selection In this retrospective study, among patients who started orthodontic treatment at the orthodontic department of the Stomatological Hospital of Air Force Medical University from January 2019 to December 2023, 28 patients (7 males, 21 females; mean age: 24.3 ± 4.3 years) who underwent bilateral maxillary molar distalization, were treated using clear aligners, and completed the entire treatment process were included.
Inclusion criteria were (1) adult patients, age ≥ 18 years, (2) high-quality CBCT images before (T0) and after (T1) orthodontic treatment, (4) skeletal Class I or Class II malocclusion together with Angle Class II malocclusion, (5) a treatment plan designing bilateral molar distalization ≥ 2 mm, and (6) no combined treatment with fixed appliances or other auxiliary appliances (Table ). A non-extraction (except for the third molars) treatment with sequential maxillary molar distalization was performed. The molar distalization patterns and extra anchorage management devices are shown in Fig. . All patients changed aligners every 1–2 weeks following the manufacturer's protocol. The average number of refinements among all patients was 2.2. The protocol for this retrospective study was approved by the Ethics Committee of the Stomatological Hospital of Air Force Medical University (IRB-REV-2022101). CBCT data collection The crowns and roots of a total of 391 teeth (112 incisors, 56 canines, 112 premolars and 111 molars) were analyzed using CBCT records from 28 patients, and each tooth crown and root was measured twice (T0 and T1). All CBCT images were acquired with the same machine (KaKo Kerr, Orange, CA, USA). The settings included a field of view (FOV) of 23 cm × 17 cm, an exposure of 37.1 mAs, a scan time of 17.8 s at 120 kV, an axial slice thickness and voxel size of 0.15 mm, and a resolution of 768 × 768 pixels. A single well-trained expert with abundant experience in CBCT and volumetric measurements made all measurements. In addition, cephalometric radiographs of all patients before (T0) and after (T1) orthodontic treatment were reconstructed from the CBCT. Construction of the coordinate system and model optimization The images were imported into Dolphin Imaging software (version 11.95). First, the coordinate system for tooth movement measurement was generated on the basis of the pre-treatment CBCT using the method described by Dai et al. and Gao , whereby the x-axis indicates the buccal-palatal direction, the y-axis the occlusal-gingival direction, and the z-axis the mesial-distal direction (Fig. A). Then 3D models of the craniofacial hard tissue, including the maxilla, mandible, and teeth, were reconstructed using the threshold segmentation method. After that, the post-treatment CBCT data were imported and optimized as well (Fig. B). Superimposition of pre- and post-treatment models Next, maxillary registrations of the pre-treatment and post-treatment craniofacial models were conducted. The side-by-side superimposition was performed first, and the maxillary registration region included the bilateral frontomaxillary sutures and infra-orbital foramina (Fig. C), followed by the automatic superimposition (Fig. D). It was crucial to transfer the pre-treatment coordinate system to the post-treatment CBCT. In this way, superimposition of the pre-treatment and actual post-treatment maxilla and maxillary dentition based on maxillary registration was acquired, and this process was performed in the same coordinate system (Fig. E). Measurement The three-dimensional displacements and directions of the crowns and roots of all teeth were measured (Fig. F). For displacement measurements, the mesial buccal cusp and the mesial palatal root apex of the molars, the buccal cusp and the palatal root apex of the premolars, the incisal edge center and the apical point of the incisors, and the crown point and the apical point of the canines were labeled on the pre- and post-treatment CBCT, respectively (Fig. ).
Finally, the actual tooth movement in three dimensions could be calculated. As for the cephalometric radiographs, cephalometric markers were labeled manually and the measurements were made using the Dolphin Imaging software (version 11.95). Statistical analysis Data were analyzed using IBM SPSS Statistics for Windows (version 19.0; IBM, Armonk, NY). Superimpositions of pre- and post-treatment CBCT were repeated twice, and the achieved 3D displacements and angular changes were re-measured by a single operator. Bilateral measurements were pooled together to obtain a doubled sample ( n = 56). Intra-operator agreement was evaluated using Pearson correlation coefficients and Bland-Altman analyses. The data distribution was normal; thus, a paired t test was used to compare achieved and predicted changes. A mixed-effects model was used to assess whether the two molar distalization patterns and the three anchorage management devices had different influences on distalization accuracy. A P < 0.05 was considered statistically significant. The sample size was calculated with α = 0.05 and a power of 88%. The minimum sample size required for this study was 6 teeth for each site.
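To make the measurement and comparison steps concrete, the following sketch (written in R rather than SPSS, with a hypothetical landmark file and invented column names) illustrates how per-axis displacements could be derived from landmark coordinates expressed in the shared pre-treatment coordinate system, compared with the predicted values by a paired t test, and summarized as an achieved/predicted accuracy ratio; it is an illustration of the described workflow, not the code used in this study.

  # Illustrative sketch only: 'landmarks.csv' and its columns (tooth, point, x/y/z at T0
  # and T1, the predicted change pred_dz, and a repeated measurement dz_repeat) are hypothetical.
  lm_data <- read.csv("landmarks.csv")

  # Achieved displacement per axis: x = buccal-palatal, y = occlusal-gingival, z = mesial-distal
  lm_data$dx <- lm_data$x_T1 - lm_data$x_T0
  lm_data$dy <- lm_data$y_T1 - lm_data$y_T0
  lm_data$dz <- lm_data$z_T1 - lm_data$z_T0

  # Paired t test of achieved vs. predicted mesial-distal displacement for one tooth site
  m2_crown <- subset(lm_data, tooth == "second_molar" & point == "crown")
  t.test(m2_crown$dz, m2_crown$pred_dz, paired = TRUE)

  # Accuracy expressed as achieved / predicted displacement, in per cent
  mean(m2_crown$dz / m2_crown$pred_dz) * 100

  # Intra-operator agreement between the two repeated superimposition measurements
  cor(m2_crown$dz, m2_crown$dz_repeat, method = "pearson")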
Characteristics of study participants A total of 28 participants were included in this retrospective study. All participants were adults, with an average age of 24.3 ± 4.3 years. As for gender, the participants consisted of 7 males (25%) and 21 females (75%). The baseline clinical characteristics are listed in Table . By checking the predicted tooth movement in ClinCheck, a total of 391 maxillary teeth (112 incisors, 56 canines, 112 premolars and 111 molars) were included for analysis (one participant had lost the upper right 2nd molar before the treatment). Each tooth was measured twice (before and after the orthodontic treatment). The deviation between the pre- and post-treatment lateral cephalometric radiographs The lateral cephalometric radiographs showed that the ANB angle was almost unchanged after the treatment.
The mandibular plane angle was slightly increased, but there was no significant difference. The overbite and overjet decreased as expected, but the final overjet was still larger than the normal value (Supplemental Table ). Predicted and achieved displacement of maxillary anterior teeth in three dimensions In the labial-palatal dimension, palatal displacement was designed for the crowns and roots of the anterior teeth. However, little was achieved ( P < 0.0001), and labial displacement even occurred at the crown (central incisor: 0.18 ± 1.23 mm, canine: 0.20 ± 1.36 mm). In the mesial-distal dimension, the achieved displacement was less than predicted, except that the crown of the central incisor exhibited more distal displacement than prescribed (predicted: 0.47 ± 0.94 mm, achieved: 0.94 ± 2.44 mm). In the occlusal-gingival dimension, more extrusion than predicted occurred in all the anterior teeth (Table ). The deviation of the posterior teeth in the buccal-palatal dimension The predicted and achieved displacements of the posterior teeth in the buccal-palatal direction are shown in Table . Although there was no significant difference between the predicted and the achieved displacement, the actual displacement of all teeth was less than expected, except for the root of the 1st premolar ( P = 0.007). As for the 2nd molar, palatal torque was preset for the crown in order to avoid unwanted buccal tipping, and about half was finally achieved (predicted: -0.17 ± 1.07 mm, achieved: -0.08 ± 1.13 mm) (Fig. A). Interestingly, palatal displacement was designed for the root of the 1st premolar, but buccal displacement occurred (Fig. B). The accuracy based on the predicted and achieved displacement was higher for the crown, meaning that CAs were better at moving the crown (Fig. C). The buccal-palatal displacement ratio of crown to root represents the tipping tendency. The greatest buccal tipping occurred at the 2nd premolar and decreased toward the distal portion of the aligner. The deviation of the posterior teeth in the mesial-distal dimension In the sagittal direction, the distal displacement of the posterior teeth after CAT was confirmed (Table ). There remained a significant difference between the predicted and achieved distalization (all P < 0.001). The posterior arch revealed a progressive increase of the distal displacement in the premolar and molar regions, with the greatest distalization occurring at the bilateral 2nd molars (the crown: 0.73 ± 1.27 mm, the root: 0.97 ± 1.26 mm) (Fig. D). Mesial displacement of the premolars' roots occurred even though distal displacement was prescribed (Fig. E). As for the crown, the accuracy of molar distalization was 5.58%, 10.13%, 19.21%, and 31.06% for the 1st premolar, 2nd premolar, 1st molar and 2nd molar, respectively. As for the root, the efficacy of molar distalization was − 19.30%, -34.22%, 31.33%, and 37.89%, respectively (Fig. F). This meant that although CAT was thought to be effective for molar distalization, the accuracy was far below expectation. Rather than bodily distalization, CAs could only achieve tipping-type distalization. In the premolar region, the crown and root moved in opposite directions. The roots of the premolars exhibited a mesial displacement tendency despite a 1.5–2 mm distal displacement having been designed.
Fortunately, in the molar region, the crown and root moved in the same distal direction, and the distal displacement was more prominent in the second molar's root than in its crown due to the preset distal movement value (the crown: 0.73 ± 1.27 mm, the root: 0.97 ± 1.26 mm). The deviation of the posterior teeth in the occlusal-gingival dimension In the vertical dimension, the CBCT superimposition showed significant differences between the predicted and achieved displacement (all P < 0.05). Taking the buccal cusps of the premolars and the mesial buccal cusps of the molars as the landmarks, the extrusion of the maxillary posterior teeth was more than predicted (Table ). Specifically, the difference between achieved and predicted vertical movement was 0.32 ± 0.90 mm, 0.36 ± 1.13 mm, 0.40 ± 1.92 mm, and 0.53 ± 1.59 mm for the 1st premolar, the 2nd premolar, the 1st molar and the 2nd molar, respectively, meaning that the greatest extrusion occurred at the 2nd molar. The influence factors of molar distalization A mixed-effects model was constructed to explore the influence of molar distalization patterns and extra anchorage management devices on the labial-lingual displacement of the central incisors and the mesial-distal displacement of the 1st molars. The results showed that there was no significant difference between the one-by-one movement pattern and the two-molars-together pattern. TADs and class II elastics are commonly used as extra anchorage management devices. It turned out that the central incisors exhibited lingual retraction when TADs were adopted, although the difference was not significant. Contrary to expectation, with class II elastics the central incisors' crowns exhibited more severe labial proclination and the 1st molars' roots exhibited more mesial displacement, both of which were statistically significant (Fig. and Supplemental Table ).
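The exact specification of the mixed-effects model is not reported; as a hedged illustration of how such a comparison could be set up, the R sketch below assumes a random intercept per patient to account for the pooled bilateral measurements and uses an invented data frame and column names, so it should be read as an example rather than the analysis actually performed (which used SPSS).

  # Illustrative sketch only: the random-intercept specification and the data frame /
  # column names are assumptions, not taken from the paper.
  library(lme4)
  library(lmerTest)  # adds P values for the fixed effects in summary()

  acc <- read.csv("molar_accuracy.csv")

  # Mesial-distal accuracy of the 1st molar modelled against the distalization pattern
  # (one-by-one vs. two-together) and the anchorage device (TAD, class II elastic, none),
  # with a random intercept for each patient because bilateral measurements are pooled
  m <- lmer(md_accuracy ~ pattern + anchorage + (1 | patient_id), data = acc)
  summary(m)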
CAs are thought to be good at molar distalization , and the efficacy of CAT is a major concern of the orthodontist . It has been reported that a predictability of up to 88% for maxillary molar distalization can be achieved using CAs . The actual distalization of the maxillary posterior teeth is important for correcting the occlusal relationship, and the retraction of the anterior teeth is crucial for improving a protrusive lateral profile. However, according to clinical observation, the actual molar distalization distance may not accord with what is expected. To verify the accuracy of molar distalization using CAs, the actual movement direction and displacement of the crowns and roots of the anterior and posterior teeth in three dimensions should be obtained. There are several frequently used methods to evaluate actual orthodontic tooth movement. Cephalometric measurement is a convenient and valid way to assess tooth movement with low radiation . However, its 2-dimensional nature limits its usage; for instance, tooth movement in the transverse dimension cannot be reflected. With the development of digital technology, registration of digital intraoral scanning models is evolving as a method to evaluate the therapy outcome from a 3-dimensional view . However, the deviation between the actual physical structure and the digital model cannot be ignored, and the scanning accuracy is easily affected by factors such as scanning technique, environmental conditions, and the angle and distance between teeth . Moreover, digital intraoral scanning models can only reproduce the configuration of the tooth crowns; the moving direction and distance of the tooth roots are lost. CBCT records the physical structure with high specificity, accuracy and reliability . At present, superimposition of pre-treatment and post-treatment CBCT data is the most accurate method to evaluate the actual amount of tooth movement during orthodontic treatment. Because the basal bone of the maxilla is stable in adults, the pre-treatment and post-treatment CBCT craniofacial models were therefore used to evaluate maxillary tooth displacement . The key to obtaining a better overlap image is to find a relatively stable region for registration. In the present study, the bilateral frontomaxillary sutures, infra-orbital foramina and pogonion were used to conduct the side-by-side superimposition, followed by the automatic superimposition, to ensure consistency between the basal bone of the pre-treatment and post-treatment CBCT images. As for the anterior teeth, slight labial displacement of the incisors occurred although palatal displacement was designed. The lateral cephalometric radiographs confirmed a decrease in overjet, but the final overjet was still larger than the normal value. Based on this, we remain sceptical about the actual retraction effect of molar distalization on the anterior teeth. The real reason for the improvement in the lateral profile needs further research. As for the posterior teeth, our results suggested that the inter-premolar width increased after molar distalization.
The increased width came mostly from buccal tipping, possibly because of delayed tooth displacement under rapid molar distalization . Transverse incongruity may affect the anterior-posterior position of the maxilla and mandible, resulting in class II or III malocclusion. Therefore, transverse matching of the bimaxillary dentition is the foundation of a stable occlusal relationship. The total length of the aligner becomes longer as the molars are moved distally, and a tendency for the posterior teeth to occupy the space through buccal tipping appears. That might explain the phenomenon whereby the root of the 1st premolar was designed to move palatally but buccal displacement occurred. The buccal tipping tendency was most severe at the 2nd premolar and decreased with distance from it. Bodily movement is regarded as the most difficult type of movement to control with CAs, and the loss of control increases toward the distal portion of the aligner, which is similar to fixed appliances to some extent . As the aligner extends from the center to the distal extremities, lower forces are released owing to the increased elasticity and decreased stiffness of the aligner, which contributes to the undesired buccal tipping of the posterior teeth . The 2nd molars presented crown palatal torque, possibly because the preset torque design partly offset the buccal tipping tendency. To acquire better control of the premolars and molars during distalization, the use of attachments on the teeth is recommended for the control of angulation and inclination . What is more, overcorrection of the posterior teeth torque is necessary in order to obtain normal torque after the treatment . The sagittal displacement of the posterior teeth is the key to obtaining the desired Angle Class I occlusal relationship. Our results suggested that the actual distance of molar distalization was less than designed. The average efficacy for the crown in the sagittal direction was 5.58%, 10.13%, 19.21%, and 31.06% for the 1st premolar, 2nd premolar, 1st molar and 2nd molar, respectively. On the one hand, the distalization of the terminal teeth is the most predictable, and the efficacy decreases from the terminal arch toward the central part, which is consistent with previous studies. On the other hand, the efficacy of molar distalization was far less than expected and than previously reported , despite auxiliary devices such as micro-implants and class II elastics having been used. Interestingly, the efficacy of distalization was more obvious in the molars' roots than in their crowns. Two reasons might explain this phenomenon. First, in order to avoid the uncontrolled distal tipping tendency, a larger distalization distance had been preset for the root. Second, when the 1st molars were moved distally, the anterior anchorage might not have been enough to resist the counter-force, so the 2nd molars tended to move mesially. The counter-force on the crown was smaller than the force applied when moving the 2nd molars distally, so mesial tipping displacement occurred at the 2nd molars. Finally, the roots of the 2nd molars moved farther than the crowns, and a similar process would happen to the 1st molars later. The CBCT superimposition showed that the actual extrusion of the posterior teeth was more than the predicted value; the distal displacement and extrusion of the molars will increase the vertical dimension, and clockwise rotation of the mandible will occur .
In addition, buccal tipping displacement in the transverse dimension occurred during molar distalization, and the dropping of the molars' palatal cusps was another reason for the unwanted increase in anterior facial height. In other words, management of the molars in the transverse dimension is a prerequisite for avoiding undesired changes in anterior facial height. For this reason, attention should be paid to the vertical dimension of the lateral profile during the molar distalization process . In this study, we adopted two different molar distalization sequences. Although the difference was not statistically significant, moving the molars one by one exhibited less difference between the predicted and achieved displacement than moving the molars together. CAT has an advantage in deciding the tooth movement sequence. The one-by-one pattern is one of the most commonly used molar distalization patterns. When the bilateral 2nd molars are moving distally, the other twelve teeth are designed as the anchorage unit. The anchorage value is strong enough to move the 2nd molars without evident proclination of the incisors. After the 2nd molars arrive at the target position, the 1st molars start to move. Samoto and Vlaskalic stated that sequential distalization of molars could minimize anterior anchorage loss and uncontrolled tipping in the posterior area by maintaining maximum contact between the teeth and the aligner . However, Karsli et al. argued that distal tipping and movement were still observed even though sequential distalization of the posterior teeth was applied using aligners, and more anchorage loss was observed in the anterior region in the 33% sequenced distalization group . Further exploration of distalization patterns is needed. TADs are considered strong anchorage devices. As previously reported, even when TADs are used, anchorage loss cannot be completely avoided . The distal force applied to the 1st molars generates a counter-force on the anterior teeth, making them move forward. What is more, as the length of each aligner is fixed, the aligners can only bring the 2nd molars forward to compensate. That is why the actual distalization distance of the 2nd molars is less than designed. To our surprise, class II elastics could not improve the accuracy of sagittal displacement during molar distalization, which is in accordance with the literature . On the contrary, the difference between the predicted and achieved displacement was actually enlarged. To enhance the accuracy of tooth displacement in clinical practice, new strategies are being attempted, such as placing a metallic ligature from the TADs to the canines in order to negate the unwanted labial force on the incisors. There are some limitations to this study that need to be taken into account. A larger sample including various distalization patterns, anchorage strategies and attachment designs is expected to elucidate the accuracy of molar distalization in CAT. Besides, although the pre-treatment and post-treatment CBCT data were superimposed in this study, the number of refinements and the reciprocating tooth movements should be considered simultaneously, which has significant practical management implications. Superimposition of CBCT before and after CAT showed that the accuracy of molar distalization in three dimensions was far less than expected in adults. For the maxillary anterior teeth, the achieved displacement in the labial-palatal dimension was less than predicted. For the maxillary posterior teeth, bodily distalization could not be fully achieved as predicted.
The premolars and molars exhibited greater distal tipping and buccal inclination, and less distal displacement, than predicted. In the transverse dimension, the greatest buccal tipping tendency occurred at the 2nd premolar and decreased toward the distal portion of the aligner. In the sagittal dimension, the highest accuracy of molar distalization was found in the 2nd molar, while the lowest was found in the 1st premolar. Anchorage loss cannot be completely avoided even when extra anchorage management devices are adopted. To sum up, the potential reason for the correction achieved in class II malocclusion patients needs further exploration. Supplementary Material 1 is available online.
Acral Changes in pediatric patients during COVID 19 pandemic: Registry report from the COVID 19 response task force of the society of pediatric dermatology (SPD) and pediatric dermatology research alliance (PeDRA)
786ff105-3f26-45a2-af6c-bb26fcda9477
8250200
Pediatrics[mh]
INTRODUCTION Perniosis is characterized by inflammation of small vessels, in association with cold exposure. , , The incidence in children is estimated at 2.5 cases/million children per year, though this number comes from small studies. In spring 2020, an increased frequency of acral changes resembling pernio emerged as a possible cutaneous manifestation of coronavirus disease 2019 (COVID-19). , , , Prior reports show a male adolescent/young adult predominance with mild/no preceding viral symptoms and largely negative SARS-CoV-2 testing. , , When initial cases of COVID-19 with SARS-CoV-2 PCR positivity were reported in Wuhan, China, 2.4% of these cases were children with a mean age of 7 years, of whom 56% were males. In the initial cohort, children generally had mild disease (90%), with a median duration of symptoms of 2 days. , , With the assumption of mild disease in pediatric patients and limited testing availability in early 2020, children were tested less frequently than adults even when they had symptoms or positive contacts. , When COVID-related skin changes were first noted, acral areas of erythema were described. Acral changes affected younger adults (average age 32.5 years), occurred after other symptoms (59%), persisted for days to weeks (mean 12.7 days), and generally were associated with milder disease (no hospital admission or need for intensive care). In one Spanish study, 41% of pernio cases (29/71) had SARS-CoV-2 confirmed. More recent studies of the acral rash, however, mostly show negative nasopharyngeal PCR testing for SARS-CoV-2 or were unable to correlate due to lack of any testing in those with acral skin changes. , This pediatric-specific registry describes a large cohort of children with acral changes. We aimed to discover whether pediatric-specific trends in demographics, clinical features, laboratory findings, and histopathology would lead to a better understanding of this phenomenon in relationship to SARS-CoV-2. METHODS A pediatric-specific registry to collect information from healthcare professionals in the United States, Canada, the United Kingdom, and Central America (primarily pediatric dermatologists, pediatricians, and pediatric rheumatologists) was established on April 12, 2020, by the Pediatric Dermatology SARS-CoV-2 Response Task Force, a collaborative effort by members of the Society for Pediatric Dermatology (SPD) and the Pediatric Dermatology Research Alliance (PeDRA). The registry is housed in REDCap (Research Electronic Data Capture, Vanderbilt University, Nashville, TN) at the Children's Hospital of Philadelphia. Healthcare providers were notified about the registry via the SPD, PeDRA, and the British Society for Paediatric Dermatology (BSPD). The registry captured demographics, past medical history, family history, clinical findings and course, treatment, viral PCR/antibody testing, histologic evaluation, and other laboratory testing. The data were analyzed using Stata 16 software (StataCorp, College Station, TX). The registry was granted an exemption from the Children's Hospital of Philadelphia Institutional Review Board after determination that it did not meet the definition of Human Subjects Research. Each submitting site's data entry was regulated and approved or waived by the local institution's IRB. RESULTS The registry compiled 384 individual cases of patients with acral skin changes between April 13, 2020 and July 17, 2020. Six subjects were excluded due to age greater than 18 years, and 378 (age 2 months to 18 years, mean 13 years ± 3.6 years) were evaluated (Table ).
Most were male (60.6%) and white/Caucasian (72%) (Table ). Most lived in the United States (69.6%) or Canada (23.3%), with the majority in the United States from California (22.5%), Illinois (11.1%), Wisconsin (6.6%), New York (2.4%), New Jersey (4.8%), and Pennsylvania (3.8%). 309/378 (81.7%) patients had no comorbidities. Approximately 30% (114/378) had COVID illness symptoms during the 30 days prior to presentation, including fever (12.7%; 48/378) and dry cough (11.6%; 44/378) (Table ). Potential exposure to SARS-CoV-2 prior to the acral changes was noted by 33.6% (127/378), most often by those living in a community with high rates (16.7%; 63/378) or contact with a family member exposed through work (7.7%; 29/378). Close contact with a SARS-CoV-2-positive individual was reported by 2.6% (10/378) (Table ). Of 378 subjects, 1.6% had SARS-CoV-2 infection confirmed by PCR or antibody testing, while 134 (35.4%) had negative testing for the virus by PCR or tested negative for antibodies (8). 47.4% confirmed they had no SARS-CoV-2 testing. None were hospitalized or died. Some subjects (~35%) had additional blood testing. Among these subjects, abnormalities were demonstrated in complete blood counts, antinuclear (typically speckled) and anti-phospholipid antibodies, complement, D-dimer, fibrinogen, and inflammatory markers (Table ). The lesions lasted an average of 21.6 days and were virtually always on the feet (96.3%). Some subjects also had lesions on the hands (11.9%) or head/neck (11.4%) (Table and Supplemental Table ). Toes were most commonly affected, but changes on the dorsal feet, heels, and periungual area were also reported. The skin changes were largely described as pink or red macules/patches (91.3%), bullae (6.1%), vesicles (11.6%), erosions (14.8%), and ulcers (3.7%). In 5.3% (20/378), desquamation was noted. Thirteen cases had associated histopathology. Among these, the most common changes were a superficial and deep lymphocytic infiltrate with vacuolar change and purpura as well as hemorrhagic parakeratosis in the stratum corneum (Table ). DISCUSSION We present a large collection of children and adolescents with acral skin manifestations that presented in the initial phases of the SARS-CoV-2 pandemic. Although most cases were in adolescent males, several cases occurred in infants (the youngest just 2 months of age). The age and male predominance noted here are atypical for classical pernio, but similar to reports in primarily adult COVID registries. , We hypothesized a connection between SARS-CoV-2 exposure/infection and these changes but found it difficult to confirm causation because of limited testing for the virus/antibodies and the striking proportion of negative results in children who did have testing. Notably, there is growing evidence that SARS-CoV-2 PCR initially had high false negative rates (2%-29%), making early tests unreliable for excluding infection. , In addition, there is no reference standard for measuring the sensitivity of SARS-CoV-2 antibodies in asymptomatic/mild cases, making it harder to exclude a connection on these grounds alone. Among the 6 patients confirmed to have SARS-CoV-2 by PCR or antibodies, most had acral changes weeks (average 22 days) after the initial SARS-CoV-2 symptoms or positive test, suggesting that the skin inflammation may represent a late manifestation or post-viral change triggered by a secondary inflammatory response.
In these cases, inflammation and a dysregulated immune response resulting from even mild SARS-CoV-2 infection might prompt those with environmental insults (cold, damp environments) or genetic predisposition to manifest with new skin changes (Figure ). We could then hypothesize that, if the skin changes are a result of late inflammation, RT-PCR testing will be negative in most because testing at the time of the rash is too late to capture the initial infection. Traditional diagnostic and current antibody testing may be missing those with antibodies against the spike protein. Recent data support this hypothesis. In one study, immunohistochemistry for SARS-CoV/SARS-CoV-2 spike protein showed granular positivity in endothelial cells and epithelial cells of eccrine glands in two acral biopsies in patients with minimal or no systemic symptoms. Magro et al. found endothelial cell localization of SARS-CoV-2 protein in three cases of COVID-19-associated perniosis, and Colmenero showed SARS-CoV-2 RNA in skin biopsy samples from patients who previously had negative nasopharyngeal PCR testing. , Still, there is no agreement. A few argue that pernio is solely the result of greater exposure (such as by being barefoot in unheated homes) during the period of sheltering in place. , We disagree with this because there has been no evidence of unseasonably cold and wet weather that would explain the higher incidence of pernio observed. In a retrospective study of 3.2 million children in Chicago, 8 cases of pernio were identified in a 10-year period, compared to 41 identified in Chicago in the spring of 2020. The Children's Hospital of Philadelphia saw an average of 2.6 cases of pernio annually from 2015 to 2019, compared to 17 cases in April-May 2020. Analysis of weather data from Philadelphia shows that 2020 was not statistically colder (or warmer), nor did it have more precipitation during these months than over the same months during the previous 5 years. Still, many children reported doing schoolwork at desks or tables without socks or shoes, which would be unusual in the school environment. Despite this, there is no evidence that home schooling or bare feet can explain the male adolescent predominance or why infants/toddlers would have increased numbers of cases. , The second wave of increased acral pernio cases that many reported in the early fall underscores a direct relationship to the virus rather than a temporal coincidence. Since our first analysis, 56 additional cases were added to the registry in the late summer, fall, and winter, and mostly similar trends were observed, with slightly higher rates of positive testing. Of these 56 cases, 5 tested positive by PCR for SARS-CoV-2 (Supplemental Table ). One subject was hospitalized due to COVID-19. We also reached out to those who had submitted cases in our first analysis regarding recurrences. Several investigators and clinicians noted recurrences of acral pernio in the fall and winter in subjects previously submitted. In one subject, acral pernio recurred, and the subject tested negative for SARS-CoV-2 antibodies after the recurrence. Most subjects were Caucasian, despite ethnically diverse locations. Possible reasons for this include the following: i) difficulty in recognizing subtle erythema in darker skin; and ii) poorer access to telemedicine for skin of color populations during lockdown periods. , This is particularly relevant given the increased risk of severe COVID-19 disease and MIS-C (multisystem inflammatory syndrome in children) in children of color.
Blood testing was performed in 35% of subjects, and some children with few or no other symptoms had laboratory abnormalities. Positive antinuclear antibody (ANA) titers (titer > 1:80) were found in 34/83 (41%), with no other comorbidities, symptoms, or autoimmune family history. Positive ANA can be a marker for acute and chronic infection. , Here, of those positive, many had speckled patterns, and one was eventually diagnosed with Sjögren disease (reinforcing the need to consider autoimmune disease in children with pseudo-chilblains). Other potential evidence of recent/active infection/inflammation (despite negative testing for SARS-CoV-2) included elevations in hemoglobin, complement, and interferon gamma levels. There are reports of adults and children with severe presentations of COVID-19 with ischemic purpura and sequelae due to coagulation abnormalities. In this registry, few children had laboratory testing and none had histopathologic evidence of clotting abnormalities, consistent with milder disease and a good prognosis. Among those with laboratory coagulation abnormalities, there were no complications, necrosis, or poor outcomes. Our findings suggest additional blood tests should be considered. Those with positive ANAs should be rechecked to understand the relationship between this marker and SARS-CoV-2 infection. Providers may also consider SARS-CoV-2 antibody testing in children who have positive ANA testing and acral symptoms but have not undergone prior SARS-CoV-2 testing. In summary, our large dataset of children presenting with acral pernio-like lesions during the COVID-19 pandemic suggests there could be a direct, not just temporal, relationship with SARS-CoV-2. Despite only 1% of the cases having positive PCR testing for SARS-CoV-2, our findings provide possible support for an association between pernio-like changes and SARS-CoV-2. Reasons include the large number of cases studied (n = 378), the proportion (30.2%) with viral symptoms prior to skin changes, some patients with a known exposure to COVID-19, and many with inflammatory marker elevations. Though an epiphenomenon of the pandemic and a byproduct of quarantine are possible, we believe these explanations are unlikely. Definitive, reproducible confirmation of a direct association between SARS-CoV-2 infection and acral pernio-like changes in large numbers of patients has remained elusive, likely in part due to limited availability of and access to reliable and specific viral tests, both clinically and histologically. The interim analysis of this database may provide those directly involved in the care of children (clinicians, families/caregivers, health policy makers, and public health officials) with a better sense of prognosis for children who present with pernio-like lesions, since all recovered without short-term serious sequelae. Further studies are necessary to explain knowledge gaps, in particular the low rates of positive SARS-CoV-2 tests and the precise immunopathogenesis of this cutaneous finding relative to infection and immunity. We were limited by selection and confirmation bias. At the time of data collection, SARS-CoV-2 diagnostic testing was not widespread, and even currently available testing may be inadequate. Media exposure to the idea of "COVID toes" likely introduced recruitment bias. Differences in the ability to recognize acral changes in dark skin tones and/or decreased access to care may explain why few Black patients were added to the registry. 
Prospective studies with improved antibody-based immunoassays, diagnostic lesional PCR, and inflammatory biomarkers are needed. Longitudinal studies would help to determine long-term sequelae. In the short term, it appears that patients with acral changes had full recovery. Supporting information: Tables S1, S2, and S3 are available as additional data files.
Apexification of an Endodontically Failed Permanent Tooth with an Open Apex: A Case Report with Histologic Findings
90e08eeb-ae98-4d85-8bc8-c44d19bc0076
11857209
Dentistry[mh]
Traumatic injuries to permanent teeth may result in damage to the periodontium, adjacent bone, and the neurovascular supply of the pulp. The outcome of the compromised pulp will be dictated by the natural balance between cellular ingrowth and bacterial infiltration, resulting in either sterile necrosis, infection-induced necrosis, revascularization, or regeneration of the injured pulp. A significant consequence of developing pulp necrosis in a traumatized immature tooth is the cessation of root growth. This occurrence results in thin, fragile dentinal walls, complicating appropriate debridement and optimal apical sealing with conventional endodontic treatment procedures. The management of such cases is considered challenging for dental professionals, necessitating different approaches. Traditionally, the apexification procedure served as a treatment modality to either induce the formation of an apical barrier or continue the development of an immature apex. For an extended period, apexification entailed the application of calcium hydroxide (Ca[OH]2) paste to achieve root-end closure, followed by root canal therapy. This long-term therapy presents several disadvantages, such as challenges in patient follow-up, inconsistency in the process of apical closure, and compromised tooth structure, which increases the risk of root fracture. Subsequently, mineral trioxide aggregate (MTA), a calcium silicate-based hydrophilic cement, was introduced to the field of endodontics by Torabinejad and colleagues. This material demonstrated biocompatibility, induced odontoblastic development, exhibited antibacterial properties, possessed low solubility, and expanded upon setting; hence, MTA emerged as the preferred material for apexification by facilitating the placement of an artificial apical plug to encourage apical-end closure. Nevertheless, MTA possesses hydrophilic characteristics that necessitate moisture for the setting process, along with prolonged setting times of up to 3 h and handling challenges, prompting the exploration of alternative materials. Subsequent members of the calcium silicate-based family of materials were introduced to address these issues, including Biodentine™ (Septodont, Saint-Maur-des-Fosses, France), iRoot BP Plus (Innovative BioCeramix, Vancouver, BC, Canada), and TotalFill® BC RRM™ Putty (FKG Dentaire, Sàrl Le Crêt-du-Locle, Switzerland), among various other brands. These materials have decreased the setting time to an average of 9–12 min, hence eliminating the two-step obturation procedure. Consequently, such materials have been utilized in apexification cases. Regenerative endodontic treatment (RET) is a treatment modality that has been implemented in recent years to address properly selected cases of immature permanent teeth with necrotic pulp. This treatment aims to revitalize the damaged tissues within the canal space and facilitate the maturation of the root as well as thickening of the dentinal walls by hard tissue deposition. RET is founded on a tissue bioengineering paradigm that incorporates four critical components: stem cells, scaffolds, bioactive growth factors, and disinfection, to achieve successful outcomes. Although RET has been regarded as an alternative treatment option for an infected immature tooth, numerous studies have demonstrated a lack of consistency in root lengthening, wall thickening, and apical closure. 
Apexification is a well-established treatment that has been shown to have favorable outcomes and consistent results, as evidenced by several clinical studies and case reports. The primary radiographic outcomes seen are the resolution of apical radiolucency, development of an apical barrier, and apical closure. Histological studies of apexification procedures in human and animal models demonstrated the formation of newly mineralized tissue above the apical foramen, defined as either bone-like tissue, cementum-like tissue, or osteodentin tissue. To our knowledge, there is limited histological evidence supporting the apexification treatment of an endodontically failed tooth. The present case describes the successful clinical and histological observations of an apexification procedure for an endodontically failed tooth with an open apex. A 24-year-old Caucasian female patient was referred to the Department of Endodontics at the College of Dentistry, King Saud University, Riyadh, Saudi Arabia, to assess the right maxillary central incisor. The patient's chief complaint was mild-to-moderate pain during biting and discoloration of her upper front teeth. The patient had a history of trauma to the anterior maxillary region 10 years earlier, after which she underwent root canal treatment at a private clinic. The patient had no history of systemic disease and, according to the American Society of Anesthesiologists (ASA) classification, was class ASA I. A clinical examination of the right maxillary central incisor (#11) revealed a defective tooth-colored restoration and mild crown discoloration compared to the adjacent teeth ( A). Pulp testing, which involved applying Endo-Frost (Coltène/Whaledent GmbH + Co. KG, Langenau, Germany) with a cotton pellet and using an electric pulp tester (Analytic Technology, Redmond, WA, USA), revealed no response. Percussion and palpation elicited mild tenderness and pain; the tooth showed no mobility, and periodontal probing depths were within normal limits. The preoperative periapical radiograph revealed an inadequate root canal filling that was short of the apex, accompanied by a defective tooth-colored restoration ( B). The apical region exhibited a short root with a blunderbuss canal and an open apex, along with slight apical radiolucency. Based on clinical and radiographic findings, the endodontic diagnosis was a previously treated tooth with symptomatic apical periodontitis. Following a thorough discussion with the patient, the treatment options presented were an endodontic approach followed by the placement of a post/core and crown, extraction with or without subsequent replacement, or no treatment. Based on the clinical assessment, the tooth had a favorable prognosis; thus, the indicated treatment was endodontic treatment followed by the placement of a post-core-crown restoration. The endodontic treatment options and procedures were explained to the patient, including non-surgical root canal retreatment with either regenerative endodontic treatment (RET), conventional calcium hydroxide apexification, or one-step apexification. Following consultation with the prosthodontist, regenerative endodontic treatment was excluded due to the necessity of a post in the root canal space to support the ceramic crown; thus, one-step apexification was selected. 
Informed written consent was obtained from the patient to perform a one-step apexification procedure after a discussion regarding the treatment of the tooth. There was no ethical conflict. 2.1. First Treatment Visit The patient was anesthetized with 2% lidocaine with 1:100,000 epinephrine (Novocol Pharmaceutical, Cambridge, ON, Canada) using an infiltration technique. Tooth number 11 was isolated under a rubber dam, and the access cavity was re-opened. The gutta-percha was removed with H-files, and the working length was established using an electronic apex locator (Root ZX, J Morita MFQ Corp., Kyoto, Japan), measured at 0.5 mm short of the apex with a K-file #100 and confirmed by a radiograph ( A). The canal walls were not enlarged, and irrigation was conducted with 10 mL of 1.5% sodium hypochlorite (NaOCl). A final flush with saline solution was performed, and the canal was dried with sterile paper points. A calcium hydroxide medicament (UltraCal XS, Ultradent Products, Inc., South Jordan, UT, USA) was placed in the root canal, the access cavity was then sealed with the temporary restorative material Cavit G (3M Deutschland GmbH, Seefeld, Germany) ( B), and the patient was given a second appointment 3 weeks later. All procedures were conducted under an operating microscope (ZEISS microscopy, Jena, Germany). 2.2. Second Treatment Visit At the second visit, the patient was asymptomatic. Tooth number 11 was isolated using a rubber dam after the administration of local anesthetic, and access to the canal was accomplished. The root canal was thoroughly irrigated with 10 mL of 1.5% NaOCl followed by a final rinse with 5 mL of saline solution, and then dried with sterile paper points. TotalFill® BC RRM™ Putty (FKG Dentaire, Sàrl Le Crêt-du-Locle, Switzerland) was introduced into the canal and compacted apically using Schilder pluggers (DENTSPLY Caulk, Milford, DE, USA). A periapical radiograph was exposed to confirm adequate placement of the apical plug ( C). The remaining part of the canal was backfilled with injectable thermoplasticized gutta-percha. The access cavity was then restored with a Ketac™ Molar-Aplicap glass ionomer (3M Deutschland GmbH, Seefeld, Germany) and light-cured composite Filtek™ Z350 XT (3M Deutschland GmbH, Seefeld, Germany). Subsequently, a final periapical radiograph was taken ( D). 2.3. Follow-Up Visit Clinical evaluation: The patient was recalled at 6 months and 2 years postoperatively and was asymptomatic during the follow-up visits. Radiographical evaluation: The two-year follow-up periapical radiograph showed the formation of a calcific barrier at the root apex with a normal periapical area in comparison to the preoperative periapical radiograph. The objective assessment of the calcified bridge by radiographic imaging is as follows: Calcified bridge dimension: the radiopaque band observed at the root apex demonstrates a sufficient thickness, approximately 2 mm in width and 3.5 mm in length, extending across the entire width of the canal to ensure an adequate apical closure. Calcified bridge density: the radiopaque band exhibits uniformity, indicating consistent mineralization. Furthermore, the radiopacity is comparable to that of dentin or cementum and is clearly distinguishable from the surrounding radiolucent areas. 
During the subsequent follow-up visits, the prosthodontic treatment plan was revised, as the patient preferred not to proceed with a post-core crown restoration and instead opted for an implant for long-term survival. Consequently, the treatment option presented to the patient involved the continuation of endodontic therapy in conjunction with orthodontic extrusion to maintain the bone level before implant placement. The orthodontic extrusion period lasted 6 months, and the elastic was changed once a week. Subsequently, 3 months of stabilization were allowed for the healing processes. Tooth #11 was replaced by a dental implant with a length of 10 mm and a width of 5 mm via a conventional protocol. The prosthetic restoration was a porcelain-fused-to-metal (PFM) crown. A post-operative photograph and periapical radiograph of the restored single-tooth implant are shown in . 2.4. Histologic Procedure Permission for histologic examination of the tooth was obtained from the patient. After extraction ( A), the tooth was immediately placed in a 10% neutral buffered formalin solution for fixation. The tooth was then decalcified in 7% formic acid until complete decalcification. The specimen was subsequently rinsed with running tap water for 2 hours, dehydrated with ascending concentrations of alcohol (70%, 90%, and 100%), and embedded in paraffin. Longitudinal serial sections, 4 µm thick, were then obtained with a microtome in a buccolingual direction, and the specimens were stained with hematoxylin-eosin. Samples were observed under a light microscope to determine the histologic features. 2.5. Histologic Observation The histologic findings showed the formation of mineralized tissue at the root apex ( C). The primary component of this newly developed apical barrier was a continuous layer of dentin-like tissue located adjacent to the apical plug, which was characterized by dentinal tubule structures ( D). Incremental layers of cementum-like tissue, most likely acellular cementum, were formed adjacent to the dentin-like tissue ( E). Connective tissue with distinct collagen fibers was observed next to the cementum-like tissue ( F). Also, connective tissue with calcified areas was observed next to the dentin-like tissue ( G). 
This case report describes mineralized apical tissue formation in an endodontically failed maxillary central incisor with an open apex after an apexification procedure. The techniques used for managing an open apex in necrotic teeth, with or without apical periodontitis, have gone through many treatment phases, including conventional Ca(OH)2 apexification, artificial apical plug apexification, and regenerative endodontic treatment, each exhibiting various advantages as well as drawbacks. Conventional apexification using Ca(OH)2 has demonstrated reliable outcomes; however, several drawbacks have been noted, including the extended duration of treatment and the requirement for periodic replacement of the intracanal dressing, necessitating multiple visits and patient compliance. Additionally, there is an elevated risk of root fracture due to the prolonged presence of Ca(OH)2 within the root canal, as well as an increased likelihood of recontamination of the root canal system due to failures of the temporary seal. To address these limitations, the artificial apical plug approach, referred to as one-step apexification, has been developed for managing such conditions. Nonetheless, this approach lacks the capacity to promote the thickening of canal walls and/or continued root growth. The RET approach, unlike the apexification procedure, promotes the growth of immature roots, involving root thickening and lengthening, apical closure, and potential regeneration of tooth vitality. RET entails specific clinical considerations that must be adhered to in order to select the appropriate case. It is essential to consider patient and parental compliance, particularly given that the majority of cases involve young patients. Furthermore, the tooth should not require the placement of a post or core within the pulp space, and the patient should not have any allergies to the medications and antibiotics utilized in this procedure. While RET has shown encouraging results, various limitations and adverse outcomes have been identified, including an extended treatment duration, numerous appointments for disinfection, variable histological results, the possibility of crown discoloration, and the potential for treatment failure. In this particular case, RET was excluded since the tooth was designated for a post and core procedure. Consequently, one-step apexification was selected as the treatment option. The success of the apexification procedure depends on the deposition of the calcified barrier, which is controlled by the differentiation of the stem cells from the apical papilla (SCAP) that migrate from the healing periradicular tissues. 
The molecular foundation of the apexification healing process involves various growth factors, cytokines, transcription factors, and bone morphogenetic proteins (BMPs) that facilitate the differentiation of SCAP into dentin-like, cementum-like, and bone-like tissues and/or organic matrix via specific signaling pathways. The SCAP, derived from neural crest mesenchymal stem cells, are a distinct population with significant proliferative capacity, capable of self-renewal and exhibiting minimal immunogenicity. Furthermore, the SCAP are capable of remaining viable in an infected immature permanent tooth with apical periodontitis; hence, they are regarded as an essential biological source for the formation of the pulp-dentin complex and the continuing process of root development. Prior histological studies indicated a variable response of apical tissue to the apexification procedure. An animal study conducted by Ham et al. demonstrated periapical healing and the formation of new calcified tissue, recognized as bone-like tissue, cementum-like tissue, or osteodentin, at the root apex of infected, immature teeth. An additional animal study by Palma et al. indicated that the developed apical barrier predominantly included cellular cementum encircled by periodontal ligament in most teeth treated with MTA apexification. Yang et al. showed that the calcified barrier formed following calcium hydroxide apexification of an immature human premolar was composed of immature hard tissue, connective tissue, and bone. In this study, the histologic evaluation revealed an apical calcified barrier at the root apex, primarily composed of dentin-like tissue and cementum-like tissue. The dentin-like tissue was located adjacent to the apical plug and was distinguished by the presence of dentinal tubule structures. Subsequently, incremental layers of cementum-like tissue were identified, possibly representing acellular cementum tissue. Furthermore, regions of connective tissue exhibiting distinct collagen fibers were noted, along with connective tissue containing calcified patches. We are unable to correlate our findings directly with the published data, which exhibit considerable variability in the type of newly formed tissue, likely attributable to differing study designs; some employed animal jaw models while others examined human teeth, alongside variations in the treatment provided prior to histological assessment. Additionally, to the best of the authors' knowledge, this is the first histological study of an endodontically failed tooth that underwent successful apexification treatment. The objective assessment of the calcified bridge enables clinicians to ascertain the effectiveness of the formed bridge in sealing the apex and supporting periapical healing. The specific characteristics of the calcified bridge, including its dimensions and density, can be assessed using radiographic imaging techniques such as periapical radiography or cone beam computed tomography. The radiograph in this investigation indicated a radiopaque structure at the apex of the root canal, consistent with a mineralized barrier. The calcified bridge exhibited adequate dimensions, measuring approximately 2 mm in width and 3.5 mm in length. The density and radiographic characteristics indicated sufficient mineralization and closure of the apical foramen. These findings are consistent with previous studies reporting the formation of calcified barriers during apexification procedures. 
Numerous biological factors that contribute to the failure of endodontic treatment have been identified. Nevertheless, the most prominent cause of failure is the persistence or regrowth of intraradicular infection. The disinfection of the root canal system in endodontically failed teeth is of great concern and may pose greater obstacles when managing an infected immature tooth with thin dentin walls than in its mature counterpart. Evidence indicated that the use of a Ca(OH)2 medicament in MTA apexification treatments considerably promoted periodontal tissue repair and regeneration. The majority of reported apexification cases, including the current report, were conducted over two clinical sessions, during which Ca(OH)2 was applied as an intracanal medicament. The selection of material for the apical plug has a significant impact on apexification outcomes. It must exhibit superior biocompatibility, facilitate stem cell migration and differentiation, possess antimicrobial properties, remain insoluble, be user-friendly, and not induce discoloration. In addition to Ca(OH)2 and MTA, the contemporary literature supports the use of calcium silicate bioceramic materials for apical barrier formation. Interestingly, long-term prognostic studies demonstrated that apexification had high survival rates, irrespective of the type of bioactive material employed. High survival rates of Ca(OH)2 apexification have been reported to reach 86%, with an average follow-up duration of five years. A recent long-term survival study of immature traumatized incisors indicated a median survival of 10 years for Ca(OH)2 apexification and 16 years for MTA apexification. A retrospective study with an average follow-up duration of 3.3 years revealed that 86.3% of teeth treated with Biodentine™ as an apical plug exhibited complete healing or showed signs of healing. A critical consideration in the treatment of teeth with wide-open apices is the avoidance of periapical extrusion of the apical plug filling material into the periradicular tissue. Excessive filling or extension of the apical filling material has been demonstrated in prior histological investigations to correlate with significant inflammatory cell infiltration and a lack of apical barrier tissue development. This inflammatory process is thought to have impeded the repair of periodontal tissue, hence interfering with the formation of the hard tissue barrier. It has been recommended to employ a matrix at the periapex in wide-open apices to control the compaction of MTA and prevent its extrusion. A variety of biocompatible materials have been documented in the literature for this purpose, including dentin chips, bovine bone xenografts, calcium phosphate, oxidized cellulose, and platelet-rich fibrin. In the current study, we used a calcium silicate bioceramic material (TotalFill® BC RRM™ Putty) as the apical plug; it is a pre-mixed condensable putty that allows controlled placement without the necessity of an apical matrix. An interdisciplinary approach, along with accurate diagnostics, is essential for achieving improved, conservative, and predictable outcomes in aesthetic areas. The endodontist plays a crucial role in advising patients regarding the decision-making process between tooth preservation and extraction. This encompasses a discussion of the advantages, risks, and long-term consequences related to each of the options. 
In the present case, endodontic therapy followed by a post-core-crown restoration was identified as the preferred treatment modality. Nonetheless, in accordance with the patient's preferences, the treatment plan was amended to accommodate extraction followed by implant replacement. Orthodontic extrusion was implemented as a treatment modality that enhances both hard and soft tissue profiles prior to the placement of dental implants. The patient was satisfied with the color, morphology, and margins of the cemented restoration. The present case demonstrates the clinical and radiographic success of an apexification procedure in an endodontically failed permanent incisor with an open apex. The two-year follow-up visit revealed the absence of signs and symptoms, together with hard tissue formation at the root apex. The histological evaluation of the newly formed mineralized tissue at the root apex revealed a continuous layer of dentin-like tissue with an identifiable dentinal tubule structure and incremental layers of cementum-like tissue. In addition, connective tissue with distinct collagen fibers and connective tissue with calcified areas were noted.
A 4‐year follow‐up of root canal obturation using a calcium silicate‐based sealer and a zinc oxide‐eugenol sealer: A randomized clinical trial
f02a2011-09fe-4304-8fca-1fef35ff293c
11715139
Dentistry[mh]
Apical periodontitis (AP) is a chronic disease in which endodontic infection induces an inflammatory reaction within the periapical tissues, resulting in bone resorption and lesion formation (Márton & Kiss, ; Nair, ; Ricucci & Siqueira, ). A thorough root canal treatment (RCT) can prevent or treat AP (Ng et al., ). According to the literature, the estimated weighted success rates of primary and secondary RCTs range from 68% to 85% and from 70% to 86%, respectively. However, when loose criteria are applied, the success rates increase to 97% (Ng et al., , Ng, Mann, & Gulabivala, , Ng, Mann, Rahbaran, et al., , Ng et al., ). The quality of root canal filling is a significant prognostic factor influencing the success of nonsurgical RCTs (Ng, Mann, Rahbaran, et al., ). According to the European Society of Endodontology quality guidelines for endodontic treatments (Löst, ), approximately 40%–60% of the failures are related to inadequate obturation of the root canal system. State-of-the-art endodontic obturation should seal the entire root canal system, preventing microorganisms and fluids from passing through the canal to the apical tissues (AAE, ; Buchanan, ; European Society of Endodontology, ). Various filling techniques have been proposed over the years (de Chevigny et al., ; Farzaneh, Abitbol, Lawrence, & Friedman, ; Orstavik et al., ), and recently, a variety of calcium silicate-based sealers (CSBSs) have been introduced to the market (Camps et al., ; Gomes-Filho et al., ; Lee et al., ; Torabinejad et al., ; Zhang et al., ; Zhou et al., ). CSBSs exhibit hydraulic properties, allowing them to set and seal in the presence of moisture (Camilleri et al., ). They also demonstrate bioactivity; when in contact with tissue fluids, CSBSs release calcium ions and produce calcium hydroxide and apatite on their surfaces, potentially creating an interfacial layer between the sealer and dentinal walls (Donnermeyer et al., ; Salles et al., ; Sfeir et al., ; Wang, ). Additionally, a decreased inflammatory response has been observed in the bone in the presence of these products (Assmann et al., ; Wang et al., ; Zhang et al., ). One example of CSBSs is BioRoot™ RCS (Septodont, Saint-Maur-de-Fossés, France), a powder/liquid tricalcium silicate-based sealer introduced in 2015. It is recommended for use with the single-cone (SC) or cold lateral compaction root filling techniques. In the last decade, these innovative sealers have been extensively evaluated by comparing their properties to those of zinc oxide-eugenol (ZOE)-based and epoxy resin-based sealers in numerous in vitro studies (Alves Silva et al., ; Bardini et al., ; Donnermeyer et al., ; Drukteinis et al., ; Gaudin et al., ; Seo et al., ). However, they have seldom been tested in clinical trials (Bardini et al., ; Chybowski et al., ; Hu et al., ; Kim et al., ; Pontoriero et al., ), and there is a paucity of information from randomized controlled prospective clinical trials at medium- or long-term follow-up (Kim et al., ). A randomized clinical trial, part of a modular project, has reported the results at 1-year follow-up of teeth instrumented with a standardized protocol and obturated using either the SC technique with gutta-percha (GP) and BioRoot™ RCS or warm vertical compaction (WVC) of GP and Pulp Canal Sealer™ EWT (Kerr© Corporation, Orange, CA), showing that both groups had similarly good clinical performance (Bardini et al., ). 
The current study represents the second and third parts of the project, aiming to evaluate the clinical outcome of teeth obturated with the two techniques and sealers at 2- and 4-year follow-up, with the null hypothesis stating that there is no difference in the medium- and long-term success rates of teeth obturated with the SC technique and a CSBS, or with the WVC technique and a ZOE sealer. Study design This study was designed as a prospective, single-centre, randomized controlled clinical trial to compare the quality of root canal obturation and its short-, medium-, and long-term clinical outcomes in the general patient population. The trial was conducted in compliance with the principles of the Declaration of Helsinki and Good Clinical Practice after receiving approval from the Ethics Committee (PROT. PG/2017/16759, Ca, November 2017) and registered at ClinicalTrials.gov (Identifier: NCT04249206). This randomized clinical trial has been reported according to the Preferred Reporting Items for Randomized Trials in Endodontics (PRIRATE) 2020 guidelines (Nagendrababu et al., ) (Figure ). Sample size For the primary outcome, the change in periapical index (PAI) score, a mean between-group difference of 0.50 units (standard deviation 1.0) was set, with a statistical power of 0.8 and a significance level of 0.05. According to the method of Pandis (Pandis, ), a minimum sample size of 41 dependent teeth with 26 participants per group was required. To account for potential dropouts throughout the study, we planned to recruit 10% more participants, resulting in a target of 46 dependent teeth and 28 patients per group. Patient selection Patients who met the following inclusion criteria were enrolled from the outpatient clinic of the Department of Conservative Dentistry and Endodontics at the University Hospital between 1 May 2016 and 31 December 2017: aged between 18 and 80 years, in good health (American Society of Anesthesiologists classification I or II) (De Cassai et al., ), and with at least one permanent single- or multi-rooted mature tooth with signs and/or symptoms indicating the need for endodontic treatment (primary or secondary) according to the ESE guidelines (Löst, ). The exclusion criteria were as follows: patients who did not agree to undergo RCT or participate in the study, had unrestorable teeth or teeth with poor prognosis (cracks, suspected fractures, iatrogenic perforations or resorptions, and moderate to severe periodontitis), or had teeth requiring retreatment that displayed a poor prognosis due to a visibly altered root canal morphology (Gorni & Gagliani, ). Inception cohort and randomisation The following clinical and medical data were recorded before treatment: history of pain and responses to sensitivity tests, palpation, percussion, and periodontal probing (Berman & Hargreaves, ). At baseline, one or more periapical radiographs of the involved teeth were obtained on Kodak ultraspeed dental film, size 31 × 41 mm (Carestream Health©, Stuttgart, Germany), using an X-safe 70 unit at 70 kV/8 mA (Cefla Medical Equipment, Imola, Italy), and were evaluated to assess the crown, root, and periapical status. Written informed consent to undergo treatment, follow-up, and participate in the study was obtained from all patients prior to study enrolment. A total of 56 patients with 92 single- or multi-rooted teeth fulfilled the inclusion criteria (Figure ). All treatments were performed by four endodontic residents, divided into two groups depending on the day of the week on which they rotated in the clinics. 
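For orientation only, the short sketch below shows how the stated sample-size inputs (a 0.50-unit difference in PAI change with a standard deviation of 1.0, i.e. a standardized effect size of 0.5, alpha 0.05, power 0.80) translate into a conventional per-group requirement for independent observations, using Python's statsmodels. This is not the trial's calculation: it deliberately ignores the clustering of teeth within patients that the cited method of Pandis accounts for, so it does not reproduce the reported figures of 41 dependent teeth and 26 participants per group.

```python
from statsmodels.stats.power import TTestIndPower

# Standardized effect size: mean difference / standard deviation = 0.50 / 1.0
effect_size = 0.50 / 1.0

# Conventional two-sample t-test power analysis for independent observations.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)
print(f"Independent-observations requirement: ~{round(n_per_group)} per group")  # ~64
```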
The first patient on the list was randomly assigned by the clinical supervisor to either the SC technique with the CSBS or the WVC technique with the ZOE sealer, based on the outcome of flipping a coin. The second patient was assigned to the alternative technique, and the alternation continued until the end of the day. Random allocation ended when 46 teeth were assigned. Patients enrolled in the study were unaware of the treatment group to which they belonged. Dental treatment RCTs were performed using a standardized protocol that varied only in terms of the technique and sealer used for canal obturation. Instrumentation and disinfection protocol After local anaesthesia and rubber dam isolation, an access cavity was created, and the working length was assessed using an apex locator (DentalPort ZX, J. Morita MFG. CORP©, Kyoto, Japan), which was confirmed with one or more periapical radiographs. All the relevant radiographs (including preoperative, master apical file at working length, post-obturation, and follow-up periapical radiographs) were taken with consistent angulation ensured by the intuitive orientation of a beam-aiming device (Rinn; Dentsply Sirona, Ballaigues, Switzerland) (Ng et al., ). Primary RCTs were performed using NiTi ProTaper Next™ rotary instruments (Dentsply Sirona, Ballaigues, Switzerland) at 300 rpm and 4 Ncm in a crown-down approach. Each canal was prepared to at least an X2 master apical rotary file. In secondary RCTs, the GP and sealer were removed manually using Gates-Glidden drills and 0.1 mL of solvent (Endosolv® E, Septodont, Saint-Maur-des-Fossés, France). The canals were then manually renegotiated using K-files (Kerr© Corporation, Orange, California). Throughout the instrumentation, root canals were continuously irrigated with 5.25% sodium hypochlorite (NaOCl) using a 31-gauge needle positioned 2 mm shorter than the working length (Niclor 5-Dentale, Ogna Lab, Muggiò, Italy). Once mechanical instrumentation was completed, each canal was irrigated with 5 mL of 5.25% NaOCl, followed by final irrigation with 5 mL of saline solution. Root canal obturation methods Following instrumentation, the canals were dried with sterile paper points. A standardized GP master cone, snugly fitting to the working length, was selected, and the canals were obturated as follows (Figures and ): BIO group: BioRoot™ RCS was prepared according to the manufacturer's instructions. Obturation was completed by placing the GP master cone, previously coated with sealer, into the canal and removing excess GP with a heated instrument. PCS group: Pulp Canal Sealer™ EWT was prepared according to the manufacturer's instructions, and obturation was performed according to the continuous wave technique (Buchanan, ). The GP master cone coated with the sealer was placed into the canal, and a pre-measured heated plugger (SuperEndo Alpha 2; B & L Biotech, Ansan, Korea) was inserted to cut and compact the master cone (Buchanan, ). Backfilling was performed through the thermoplastic injection of GP using SuperEndo Beta 2 (B & L Biotech). A periapical radiograph was obtained to assess the quality of root canal filling. All teeth were restored coronally using composite resins and dental adhesives (IPS Empress Direct; Ivoclar Vivadent, Schaan, Liechtenstein, Germany). Any teeth requiring permanent full cuspal coverage were restored by the referring practitioner within 1 month of completing the RCT. 
Follow-up assessment Clinical examinations included assessing the presence or absence of pain, swelling, sinus tract, and abscess formation. Additionally, the functionality of each tooth was evaluated. Clinical and radiographic follow-ups were performed at 1, 3, 6, 12, 24, and 48 months for each tooth. Data were recorded in a dedicated chart and updated at every follow-up visit. All radiographs were digitally scanned, saved in JPEG format, and imported into ImageJ software version 1.41 (National Institutes of Health, Bethesda, MD, USA). TurboReg (Biomedical Imaging Group, Lausanne, Switzerland) was utilized to reduce the distortion factors in the radiographs (Bose et al., ). Preoperative and recall radiographs of the teeth were assigned PAI scores (Orstavik et al., ) by two blinded, trained, and calibrated examiners (Table ) (Landis & Koch, ). Any disagreements were resolved by retaining the higher of the two scores. For multi-rooted teeth, the root with the highest score served as the reference. The same examiners assessed the quality of root canal obturation according to the criteria described by Ng et al. , Ng, Mann, Rahbaran, et al. (Table ). Quality of root canal obturation The quality of the root canal obturation was evaluated based on criteria such as length, voids, and sealer extrusion (Table ). Sealer extrusion and root-filling voids were classified as present or absent. For multi-rooted teeth, if sealer extrusion or root-filling voids were detected in at least one root, the teeth were categorized as having extrusions or voids. The root filling length was recorded as 'adequate', 'short', or 'long' (Siqueira et al., ; Sjögren et al., ). Healing After assigning a PAI score, each tooth was categorized based on clinical and radiographic assessments into the following outcome categories (Figure ): Healed: functional and asymptomatic without any sign of AP (PAI = 1). Healing: functional and asymptomatic with periapical lesions that have decreased in size (PAI >1). Diseased: non-functional and symptomatic teeth with signs of AP (PAI >1) or asymptomatic teeth with increased periapical lesions. Functional teeth were defined as teeth without symptoms, with or without AP, whether newly emerged or persisting (Friedman & Mor, ). According to the loose criteria, both healed and healing cases were classified as successful. Strict criteria considered only healed cases as successful (Tables and ) (Ng, Mann, & Gulabivala, ). The entire tooth was assessed as a unit. If a tooth was extracted due to endodontic problems such as persistent pain, swelling, sinus tract, or periapical radiolucent lesions, the treatment outcome was considered a failure (Table ). Outcome variables Healing was designated as the primary outcome of the study. Several secondary outcomes were evaluated, including extraction rates, length of filling, presence of voids, and sealer extrusion rate. For the subgroups BIOAP and PCSAP, the measurement of the PAI score at different time points (T0: baseline, T1: 1 month, T2: 3 months, T3: 6 months, T4: 1 year, T5: 2 years, T6: 4 years) and changes in PAI values compared to baseline (T1-T0, T2-T0, T3-T0, T4-T0, T5-T0, and T6-T0) were considered as tertiary outcomes. Statistical analysis The statistical analysis included descriptive statistics for categorical variables (absolute and relative frequencies) and continuous variables (mean, standard deviation, range, median, and quartiles) for the total sample and by group differentiation. 
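Before the model-based comparisons described next, the outcome definitions above can be made concrete in a short sketch. This is a minimal Python illustration, not code from the trial; the field names and the example tooth are hypothetical, and only the healed/healing/diseased rules and the strict versus loose success criteria are taken from the text.

```python
from dataclasses import dataclass

@dataclass
class ToothAssessment:
    # Illustrative fields only; names are assumptions, not the trial's data dictionary.
    functional: bool       # in function, not extracted for endodontic reasons
    symptomatic: bool      # pain, swelling, sinus tract or abscess at recall
    pai_recall: int        # periapical index at the recall visit (1-5)
    lesion_reduced: bool   # periapical lesion smaller than at baseline

def categorize(tooth: ToothAssessment) -> str:
    """Healed / healing / diseased, following the outcome definitions above."""
    if tooth.functional and not tooth.symptomatic and tooth.pai_recall == 1:
        return "healed"
    if tooth.functional and not tooth.symptomatic and tooth.pai_recall > 1 and tooth.lesion_reduced:
        return "healing"
    return "diseased"

def is_success(category: str, criteria: str = "loose") -> bool:
    """Strict criteria count only healed teeth; loose criteria also accept healing teeth."""
    if criteria == "strict":
        return category == "healed"
    return category in ("healed", "healing")

# Example: an asymptomatic, functional tooth whose lesion shrank (PAI 3 -> 2).
tooth = ToothAssessment(functional=True, symptomatic=False, pai_recall=2, lesion_reduced=True)
category = categorize(tooth)
print(category, is_success(category, "loose"), is_success(category, "strict"))
# -> healing True False
```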
Binary outcomes were compared between groups using multi-level simple binary logistic regression with generalized estimating equations (GEE). Raw odds ratios (ORs) and 95% confidence intervals were obtained from Wald's chi-squared statistic. Quantitative outcomes were analysed by linear regression models and estimated with GEE to account for the within-subject dependence of teeth. Beta coefficients and 95% confidence intervals were reported for these analyses. The homogeneity of groups concerning patient profiles and clinical variables at baseline was assessed using linear and logistic models with GEE. The significance level was set at 5% (p < .05). STATA version 17 (STATA Corp., TX, US) was used for all statistical analyses. 
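As a rough illustration of this modelling approach (not the trial's STATA code), the sketch below fits a binary GEE logistic model in Python with statsmodels, clustering teeth within patients via an exchangeable working correlation and converting the group coefficient into an odds ratio with a Wald-type 95% confidence interval. The toy data frame and its values are invented for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented tooth-level data: one row per tooth, teeth clustered within patients.
df = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3, 4, 5, 6, 6, 7, 8, 8],
    "group":      ["BIO", "BIO", "PCS", "BIO", "BIO", "PCS",
                   "PCS", "BIO", "BIO", "PCS", "PCS", "BIO"],
    "success":    [1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0],  # 1 = success under loose criteria
})

# A binomial GEE with an exchangeable working correlation accounts for the
# within-patient dependence of teeth, analogous to the multi-level model described above.
model = smf.gee(
    "success ~ C(group, Treatment(reference='PCS'))",
    groups="patient_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())

# Exponentiating the group coefficient gives the odds ratio (BIO vs PCS)
# with a Wald-type 95% confidence interval based on the robust standard errors.
coef = result.params.iloc[1]
ci_low, ci_high = result.conf_int().iloc[1]
print(f"OR = {np.exp(coef):.2f}, 95% CI {np.exp(ci_low):.2f}-{np.exp(ci_high):.2f}")
```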
Turbo Reg (Biomedical Imaging Group, Lausanne, Switzerland) was utilized to reduce the distortion factors in the radiographs (Bose et al., ). Preoperative and recall radiographs of the teeth were assigned PAI scores (Orstavik et al., ) by two blinded, trained, and calibrated examiners (Table ) (Landis & Koch, ). Any disagreements were resolved by retaining the highest possible score. For multi‐rooted teeth, the root with the highest score served as the reference. The same examiners assessed the quality of root canal obturation according to the criteria described by Ng et al. , Ng, Mann, Rahbaran, et al.  (Table ). Quality of root canal obturation The quality of the root canal obturation was evaluated based on criteria such as length, voids, and sealer extrusion (Table ). Sealer extrusion and root‐filling voids were classified as present or absent. For multi‐rooted teeth, if sealer extrusion or root‐filling voids were detected in at least one root, the teeth were categorized as having extrusions or voids. The root filling length was recorded as ‘adequate’, ‘short’, or ‘long’ (Siqueira et al., ; Sjögren et al., ). Healing After assigning a PAI score, each tooth was categorized based on clinical and radiographic assessments into the following outcome categories (Figure ): Healed: functional and asymptomatic without any sign of AP (PAI = 1). Healing: functional and asymptomatic with periapical lesions that have decreased in size (PAI >1). Diseased: non‐functional and symptomatic teeth with signs of AP (PAI >1) or asymptomatic teeth with increased periapical lesions. Functional teeth were defined as teeth free of symptoms, whether newly emerged or persistent, with or without AP (Friedman & Mor, ). According to the loose criteria, both healed and healing cases were classified as successful. Strict criteria considered only healed cases as successful (Tables and ) (Ng, Mann, & Gulabivala, ). The entire tooth was assessed as a unit. If a tooth was extracted due to endodontic problems such as persistent pain, swelling, sinus tract, or periapical radiolucent lesions, the treatment outcome was considered a failure (Table ).
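To make these outcome definitions concrete, the following minimal Python sketch shows how a single tooth's findings could be mapped to the healed/healing/diseased categories and to the loose and strict success definitions described above. The function and field names are illustrative assumptions, not the data-handling code used in this study.

```python
# Illustrative sketch: mapping a tooth's clinical and radiographic findings to
# the outcome categories and to loose vs. strict success, as defined above.
# Field names and the handling of borderline cases are assumptions.

def classify_outcome(pai, baseline_pai, symptomatic, functional):
    """Return 'healed', 'healing', or 'diseased' for one tooth."""
    if functional and not symptomatic:
        if pai == 1:                      # no radiographic sign of AP
            return "healed"
        if pai < baseline_pai:            # lesion still present but smaller
            return "healing"
    return "diseased"

def is_success(outcome, criteria="loose"):
    """Loose criteria: healed or healing count as success; strict: healed only."""
    return outcome == "healed" if criteria == "strict" else outcome in ("healed", "healing")

# Example: an asymptomatic, functional tooth whose periapical lesion has shrunk
outcome = classify_outcome(pai=2, baseline_pai=4, symptomatic=False, functional=True)
print(outcome, is_success(outcome, "loose"), is_success(outcome, "strict"))
# healing True False
```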
Healing was designated as the primary outcome of the study. Several secondary outcomes were evaluated, including extraction rates, length of filling, presence of voids, and sealer extrusion rate. For the subgroups BIOAP and PCSAP, the measurement of PAI score at different time points (T0: baseline, T1: 1 month, T2: 3 months, T3: 6 months, T4: 1 year, T5: 2 years, T6: 4 years), and changes in PAI values compared to baseline (T1‐T0, T2‐T0, T3‐T0, T4‐T0, T5‐T0, and T6‐T0) were considered as tertiary outcomes. The statistical analysis included descriptive statistics for categorical variables (absolute and relative frequencies) and continuous variables (mean, standard deviation, range, median, and quartiles) for the total sample and by group differentiation. Binary outcomes were compared between groups using multi‐level simple binary logistic regression with generalized estimation equations (GEE). Raw odds ratios (OR) and 95% confidence intervals were obtained from the Wald chi‐squared statistic. Quantitative outcomes were analysed by linear regression models and estimated with GEE to account for within‐subject dependence of teeth. Beta coefficients and 95% confidence intervals were reported for these analyses. The homogeneity of groups concerning patient profiles and clinical variables at baseline was assessed using linear and logistic models with GEE. The significance level was set at 5% ( p < .05). STATA version 17 (STATA Corp., TX, US) was used for all statistical analyses. Of the 100 teeth assessed for eligibility, 8 were excluded (Figure ). The original study group consisted of 56 patients with 92 randomized treated teeth. However, 25 teeth were lost to follow‐up at the 2‐ and 4‐year appointments and were subsequently excluded from the analysis (Figures and ). Accordingly, 45 patients were included in the outcome assessment, comprising 20 male (44.4%) and 25 female patients (55.6%), with an average age of 53.6 ± 16.6 years ranging from 28 to 84 years at baseline. On average, each patient contributed 1.5 teeth, resulting in a total sample of 67 teeth (73% recall rate) (Table and Figure ). The distribution of teeth was as follows: BIO group : 38 teeth in 22 patients, obturated using the SC technique and BioRoot™ RCS; PCS group : 29 teeth in 18 patients, obturated using the WVC of GP and Pulp Canal Sealer™ EWT. Teeth with AP from both groups were further divided into two subsamples: BIOAP : 27 teeth in 18 patients in the BIO group. PCSAP : 25 teeth in 17 patients in the PCS group (Table and Figure ). Analysis regarding group homogeneity revealed that BIOAP and PCSAP were homogeneous, whereas BIO and PCS were homogeneous in most variables, except for the distribution of teeth by sex. Teeth from female patients were more frequent in the BIO than in the PCS group ( p = .046) (Table ). Therefore, sex distribution was considered a potential confounder and was entered into adjusted models to control for its influence. The kappa scores for inter‐examiner and intra‐examiner agreement were 0.86 and 0.91, respectively, indicating good agreement (Landis & Koch, ). Two‐year follow‐up Based on loose criteria, the overall success rate of both groups (BIO + PCS) and subgroups (BIOAP + PCSAP) was 88.46% and 89.55%, respectively (Table ).
Upon comparing BIO and PCS and BIOAP and PCSAP, similar success rates were obtained (Table ). When strict criteria were applied, the overall success rate in both groups (BIO + PCS) and subgroups (BIOAP + PCSAP) was 73.08% and 77.61%, respectively (Table ). Upon comparing BIO and PCS and BIOAP and PCSAP, similar outcomes were observed (Table ). Four‐year follow‐up Primary outcome Based on loose criteria, the total success rates of groups (BIO + PCS) and subgroups (BIOAP + PCSAP) were 89.6% and 88.5%, respectively (Table ). Comparison between BIO and PCS and BIOAP and PCSAP showed that both techniques performed similarly (OR = .54; p = .336 and OR = .92; p = .904, respectively) (Table and Figure ). Results of simple binary logistic regression for ‘BIO and PCS and other factors’ revealed that secondary treatments had the odds of success reduced by 79% (OR = .21; p = .023), pulpless teeth had the odds of success reduced with respect to necrotic teeth (OR = .28; p = .064), and symptomatic AP showed increased odds of success compared to asymptomatic AP (OR = 4.24; p = .040). In subgroups BIOAP and PCSAP, symptomatic AP had increased odds of success compared to asymptomatic AP (OR = 3.89; p = .074) (Table ). Multiple binary logistic regression with GEE model estimation did not identify any difference by group ( p = .207) or subgroup ( p = .871) (Table ). When strict criteria were considered, the total success rate of groups and subgroups decreased to 83.3% and 80.4%, respectively (Table ). Both techniques (BIO and PCS; BIOAP and PCSAP) showed similar results (OR = .48; p = .224 and OR = .86; p = .827, respectively) (Table and Figure ). In a simple binary logistic regression model by both groups and subgroups, age was detected as a significant covariate (OR = 1.04; p = .047 and OR = 1.04; p = .052, respectively) (Table ). Multiple binary logistic regression with GEE model estimation did not identify any difference by group and subgroup ( p = .166 and p = .885, respectively) (Table ). Secondary outcomes (extraction rate, length of filling, extrusion and presence of voids rate) Both techniques in the groups and subgroups had a similar extraction rate (OR = 1.87; p = .336 and OR = 1.09; p = .904, respectively) (Table , Table and Figure ). A simple binary regression by ‘BIO and PCS and other factors’ revealed that secondary treatments exhibited an increased probability of extraction (OR = 4.82; p = .023), pulpless teeth showed an increased risk of extraction compared to necrotic teeth (OR = 3.59; p = .064), and symptomatic AP showed reduced risk of extraction compared to asymptomatic AP (OR = .24; p = .040). In BIOAP and PCSAP, symptomatic AP reduced the probability of extraction (OR = .26; p = .074) (Table ). When a multiple model was estimated, no differences were found by groups ( p = .207) and subgroups ( p = .865) (Table ). Both sealers and techniques in groups and subgroups showed no difference in the length of filling, and no factors influenced the probability of having short fillings (Table and Table ). Only three teeth showed voids, all of which were in the PCS and PCSAP groups (Table ). A conventional Fisher's exact test indicated a non‐significant trend ( p = .076 and p = .104). Both sealers showed no differences in terms of extrusion (Table and Table ). Mandibular teeth showed a lower probability of sealer extrusion (OR = .32; p = .032 and OR = .32; p = .054, respectively) (Table ).
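As a rough illustration of how such estimates can be produced, the sketch below fits a GEE-estimated logistic regression on simulated teeth clustered within patients and converts the coefficients into odds ratios with 95% Wald confidence intervals. The data, variable names, and effect sizes are invented for demonstration and do not reproduce the study's analysis.

```python
# Hypothetical sketch of the reported modelling approach: a binary "success"
# outcome analysed with logistic regression and GEE to account for teeth
# clustered within patients. All data and effects below are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_teeth = 120
patient_id = rng.integers(0, 45, size=n_teeth)       # ~45 patients (clusters)
group = rng.integers(0, 2, size=n_teeth)              # 0 = PCS, 1 = BIO
secondary = rng.integers(0, 2, size=n_teeth)          # secondary treatment flag
logit = 1.5 + 0.2 * group - 1.0 * secondary           # invented effects
success = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df = pd.DataFrame(dict(success=success, group=group,
                       secondary=secondary, patient_id=patient_id))

model = smf.gee("success ~ group + secondary", groups="patient_id", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()

# Odds ratios and 95% Wald confidence intervals: OR = exp(beta), exp(beta +/- 1.96*SE)
params, se = res.params, res.bse
or_table = pd.DataFrame({"OR": np.exp(params),
                         "2.5%": np.exp(params - 1.96 * se),
                         "97.5%": np.exp(params + 1.96 * se)})
print(or_table)
```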
Tertiary outcomes All teeth (100%) in both subgroups experienced a reduction of PAI from baseline to 4‐year recall (Table ). When a multiple model was estimated, no differences were found by subgroup ( p = .684) and PAI reduction was correlated with age ( p = .008), as each additional year negatively influenced this reduction (−0.02) (Table ). A non‐parametric Brunner‐Langer model for longitudinal data was used to study changes in PAI over time. An analysis of variance (ANOVA)‐type test statistic was used to estimate the main effects involving time. PAI dropped significantly over time ( p < .001), but the pattern of reduction was similar in both groups ( p = .806). Additionally, no overall differences in PAI were found among groups ( p = .786), and this result extends to every time point ( p = .806) (Table ).
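For readers who wish to reproduce this kind of tertiary-outcome bookkeeping, the following small pandas sketch tabulates PAI changes from baseline (T1-T0 through T6-T0) by subgroup; the scores shown are invented placeholders, not study data.

```python
# Hypothetical sketch: tabulating PAI change from baseline (T0) at each recall
# for the two subgroups with apical periodontitis. Values are invented.
import pandas as pd

records = [
    # subgroup, tooth, T0, T1, T2, T3, T4, T5, T6  (PAI scores 1-5)
    ("BIOAP", 1, 4, 4, 3, 3, 2, 1, 1),
    ("BIOAP", 2, 3, 3, 3, 2, 2, 2, 1),
    ("PCSAP", 3, 4, 4, 4, 3, 2, 2, 1),
    ("PCSAP", 4, 5, 4, 3, 3, 3, 2, 2),
]
cols = ["subgroup", "tooth", "T0", "T1", "T2", "T3", "T4", "T5", "T6"]
df = pd.DataFrame(records, columns=cols)

# Change in PAI relative to baseline at every time point (negative = healing)
recalls = ["T1", "T2", "T3", "T4", "T5", "T6"]
for t in recalls:
    df[f"{t}-T0"] = df[t] - df["T0"]

print(df.groupby("subgroup")[[f"{t}-T0" for t in recalls]].mean())
```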
This study presents the 2‐ and 4‐year follow‐up results of a modular project assessing the outcomes of primary and secondary RCTs. The treatments involved either the SC technique with a CSBS or WVC of GP with a ZOE sealer, which serves as a classic reference treatment (Castellucci, ). We designed this randomized, controlled clinical trial to obtain early insights into the SC‐CSBS technique because there is limited clinical information on the use of new hydraulic sealers with SC obturation. Our study was based on a stringent protocol to minimize bias, which could have affected the results. This study represents the medium‐ and long‐term follow‐up phase following the initial 1‐year report (Bardini et al., ). The decision to conduct frequent recalls initially (at 1, 3, and 6 months) followed by longer‐term assessments (1, 2, and 4 years) was aimed at providing comprehensive insights into the behaviour of treated teeth. It also sought to evaluate whether the newer sealer used with the single‐cone technique could promote faster or more complete healing compared to the traditional standard (Wang, ) (Figure and Table ).
This information is valuable in clinical settings, especially for teeth with extensive lesions that require prosthetic rehabilitation. The outcomes at 2 and 4 years were favourable and comparable in both treatment groups (Table and Figure ). According to loose criteria, the overall success rate in our study (Table ) was higher than that reported in a well‐designed systematic review on endodontic outcome (Ng et al., ). Meanwhile, when applying strict criteria, our success rate aligned closely with the pooled weighted success rate noted by the same authors (Table ) (Ng, Mann, Rahbaran, et al., , Ng et al., ). Factors such as voids, root canal filling length, and sealer extrusion did not show significant associations with the outcome; however, as discussed above, the clinical protocol was strictly controlled (Ng et al., ; Sjögren et al., ) (Table ). Comparing the present outcomes with those at the 12‐month recall (Bardini et al., ), the success rates based on loose criteria showed a reduction across all groups (BIO, PCS, BIOAP, and PCSAP) at the 2‐ and 4‐year follow‐ups (Table ). Notably, this decrease may have been influenced by the loss of two patients and their respective teeth during the study period. In contrast, when using strict criteria, the success rates were higher across all groups over time (Table ). This observation is consistent with previous findings indicating that the percentage of successful cases tends to improve with longer follow‐up durations (Ng et al., ). In this study, the combination of primary and secondary endodontic therapies was chosen to enhance the statistical analysis, a decision supported by literature suggesting that the periapical healing rates of secondary RCTs are only slightly lower than those of primary therapies (Farzaneh, Abitbol, & Friedman, ; Friedman et al., ; Ng et al., ). This similarity becomes even more pronounced when the teeth requiring retreatment do not exhibit visibly altered root canal morphology (Gorni & Gagliani, ). However, our study results indicate that secondary treatment remains a marginally significant negative predictive factor for the outcome of root canal therapy (Table ). All the teeth in both subgroups with AP demonstrated a similar trend of healing, as supported by a significant reduction of PAI over time (Table and Figure ). Interestingly, this reduction was inversely correlated with age (Table ), a common finding in previous studies (Ideo et al., ). However, the initial size of the lesion did not significantly influence this reduction (Ng et al., ). These data seem promising, considering that larger lesions often present greater challenges for healing (Chybowski et al., ; Ng et al., ), highlighting their potential as confounding variables in outcome evaluations (Gulabivala & Ng, ). Notably, preoperative symptomatic AP minimally affected the success and extraction rates in both subgroups (Table ). The results of the 2‐ and 4‐year follow‐up in this trial are consistent with those obtained from two previous clinical reports (Chybowski et al., ; Zavattini et al., ). Chybowski et al.  retrospectively described a healing rate of 83.1% at an average of 30.1 months in 307 teeth treated by different specialists, while Zavattini et al.  reported an overall success rate of 90% at 12 months for 53 treated teeth in a non‐randomized case–control study conducted in a university setting. However, differences in the designs of these studies did not render the articles completely comparable. 
Both authors (Chybowski et al., ; Zavattini et al., ) considered healed and healing cases successful, whereas we defined two levels of outcome (Bardini et al., ; Ng et al., , ). Additionally, two of the CSBSs tested in these three studies were the same (Bardini et al., ; Zavattini et al., ), whereas one was a different product (Endo Sequence Bioceramic Sealer, BC; Brasseler USA, Savannah, GA) (Chybowski et al., ). In a more recent randomized, case‐control clinical trial, the efficacy of the SC technique and a third CSBS (Endoseal TCS Maruchi, Wonju, Korea) was compared with that of WVC and AH Plus (Dentsply International Inc., York, PA, USA). The recall rate was similar to the one in this study (79%) but with a lower average follow‐up (17 months). The reported success rates for SC/CSBS and WVC/AH Plus were 94.3% and 92.3% (loose criteria) and 71.4% and 60.8% (strict criteria), respectively. Notably, a standardized instrumentation protocol was not established. The most important limitation of this study is its small number of patients, leading to reduced statistical power and increased variability in dental conditions among participants. Another potential limitation of this study is the variability in operator skills, given that four postgraduate endodontic residents performed the RCTs. However, it is worth noting that a systematic review has indicated that both postgraduate students and specialists achieve high success rates in clinical studies, regardless of whether strict or loose criteria are applied (Ng et al., ). Finally, in vitro reports have demonstrated that most of the available endodontic irrigants (NaOCl, CHX, and EDTA) may negatively affect the efficacy of CSBS (Arias‐Moliz & Camilleri, ; Donnermeyer et al., ; Razmi et al., ). Therefore, the potential interactions between the final irrigation protocol and the management of CSBSs should be considered (Sfeir et al., ). However, the clinical implications of these interactions remain unclear. In our protocol, a final rinse was performed using sterile saline before the root canal was dried and obturated. Another important aspect related to this topic is the formulation of the CSBS. According to the manufacturers, moisture from the dentinal tubules tends to initiate the setting of premixed formulations (Silva Almeida et al., ). This trial used a powder/liquid tricalcium silicate‐based sealer. This is a water‐based sealer in which the switch from cement to sealer depends on the inclusion of a water‐soluble polymer that allows the material to flow. Another aspect currently under debate is the placement of sealers in the canal. SC obturation has been reported to induce a higher void ratio compared to WVC techniques, especially in oval or wide root canals (Mancino et al., ). When this study was designed, the available information regarding sealer placement techniques was obtained from the manufacturers. This explains why, in this clinical trial, neither sonic/ultrasonic activation, sealer activation/agitation, nor flexible injection tips were used to improve CSBS distribution in the root canal space (Kim et al., ). Nonetheless, our results showed high success rates for both the BIO and BIOAP groups. As stated by other authors, the evidence regarding these clinical protocols remains weak (Sfeir et al., ).
Although further research is needed to confirm additional benefits of using bio‐inductive materials in promoting periapical healing (Gulabivala & Ng, ), our study suggests that the sealer‐based obturation technique performed with BioRoot™ RCS and SC yields predictable outcomes. Based on the results of this study, there is no significant difference in the success rate between nonsurgical primary and secondary RCTs performed using either the SC technique with CSBS or the WVC technique and ZOE sealer. The use of a CSBS with the SC technique appeared to be at least as reliable as the traditional WVC technique. The results of the 2‐ and 4‐year follow‐ups are consistent with those at 12 months and warrant further validation through larger randomized clinical trials. Giulia Bardini: Conceptualization; methodology; investigation; writing—review and editing. Montse Mercade Bellido: Supervision; writing—review and editing. Giampiero Rossi‐Fedele: Methodology; data curation; formal analysis (4 years recall). Laura Casula: Formal analysis (2 years recall). Claudia Dettori: Writing. Francesca Ideo: Methodology. Elisabetta Cotti: Project administration; validation; writing—original draft preparation. All authors have contributed significantly and are in agreement with the manuscript. The authors deny any conflicts of interest related to this study. This study was performed in accordance with the Declaration of Helsinki. Written informed consent was obtained from all individual study participants. Figure S1. Supporting Information. Data S1: Supporting Information. Data S2: Supporting Information. Data S3: Supporting Information.
Protein identification is essential in proteomics, with shotgun proteomics via mass spectrometry recognized as the primary method . This approach involves enzymatically digesting proteins into peptides for tandem mass spectrometry analysis, providing spectra that reveal peptide sequences and structures. Decoding amino acid sequences from these spectra is key to protein identification . Currently, database searching is the main method, with tools like SEQUEST , Mascot , MaxQuant/Andromeda , PEAKS DB , and pFind . However, these methods depend on comprehensive sequence databases, limiting their applicability in areas like monoclonal antibody sequencing , novel antigen identification , and metaproteome analysis without established databases . Over the past two decades, various de novo peptide sequencing tools have advanced the field , – . These algorithms infer amino acid compositions and modifications by analyzing mass differences between fragment ions in spectra. Early methods like PepNovo and PEAKS used graph theory and dynamic programming approaches. DeepNovo introduced a deep learning-based model, integrating CNNs for spectral peak analysis with LSTMs for sequence processing. PointNovo enhanced prediction precision with an order-invariant network, while Casanovo applied a Transformer architecture, treating sequencing as a translation task. Casanovo V2 was later trained on a 30 million spectra dataset to further scale up the model performance. Recent innovations like PepNet use fully convolutional networks for speed, and GraphNovo uses graph neural networks to address missing-fragmentation issues. Despite these advances – , deep learning-based de novo sequencing in shotgun proteomics still achieves low peptide recall rates of 30–50% on standard benchmarks. Currently, all deep learning models for de novo peptide sequencing are based on the autoregressive framework , meaning the generation of each amino acid is heavily reliant on its predicted predecessors, resulting in a unidirectional generation process. However, the significance of bidirectional information is paramount in peptide sequencing, as the presence of an amino acid is intrinsically linked to its neighbors in both directions . In autoregressive models, any errors in early amino acid predictions can cascade, affecting subsequent generations. Autoregressive decoding algorithms such as beam search lack the capability to retrospectively modify previously generated content, making it challenging to control the total mass of the generated sequence. This limitation arises because each token is produced based on its predecessor, meaning that altering any previously generated token would consequently shift the distribution of subsequent tokens and, therefore, require a re-generation of the whole sequence . In this research, we introduce π -PrimeNovo (shortened as PrimeNovo) (Fig. ), which represents a significant departure from conventional autoregressive approaches by adopting a non-autoregressive design that effectively addresses their unidirectional limitations. PrimeNovo is the first non-autoregressive Transformer-based model in this field. Such a design enables simultaneous sequence prediction, granting each amino acid a comprehensive bidirectional context. Another key advancement in PrimeNovo is the integration of a precise mass control (PMC) unit, uniquely compatible with the non-autoregressive framework, which utilizes precursor mass information to generate controlled and precise peptide sequences.
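To illustrate the idea behind such precursor-mass-constrained decoding, the sketch below treats it as a knapsack-style dynamic program: at every position of the non-autoregressive output it picks the amino acid that maximizes the summed log-probability while forcing the total residue mass to fall within a small tolerance of the precursor-derived peptide mass. This is a conceptual Python illustration with a coarse mass grid and a reduced alphabet, not the CUDA-accelerated PMC implementation.

```python
# Conceptual sketch of mass-constrained decoding for a non-autoregressive
# model: dynamic programming over (position, discretized cumulative mass).
import numpy as np

RESIDUE_MASS = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
                "V": 99.06841, "T": 101.04768, "L": 113.08406, "N": 114.04293,
                "D": 115.02694, "Q": 128.05858, "K": 128.09496, "E": 129.04259}
AA = list(RESIDUE_MASS)

def mass_constrained_decode(log_probs, target_mass, tol=0.05, grid=0.01):
    """log_probs: (L, |AA|) per-position log-probabilities from the model.
    Returns the sequence maximizing total log-probability whose summed
    residue mass lies within `tol` of `target_mass`."""
    L = log_probs.shape[0]
    n_bins = int((target_mass + tol) / grid) + 1
    NEG = -1e18
    best = np.full((L + 1, n_bins), NEG)
    best[0, 0] = 0.0
    back = np.full((L + 1, n_bins), -1, dtype=int)
    bin_of = {a: int(round(RESIDUE_MASS[a] / grid)) for a in AA}

    for i in range(L):
        for m in range(n_bins):
            if best[i, m] == NEG:
                continue
            for j, a in enumerate(AA):
                nm = m + bin_of[a]
                if nm < n_bins and best[i, m] + log_probs[i, j] > best[i + 1, nm]:
                    best[i + 1, nm] = best[i, m] + log_probs[i, j]
                    back[i + 1, nm] = j

    # choose the best final mass bin inside the tolerance window, then backtrack
    lo, hi = int((target_mass - tol) / grid), int((target_mass + tol) / grid)
    m = lo + int(np.argmax(best[L, lo:hi + 1]))
    assert best[L, m] > NEG / 2, "no sequence satisfies the mass constraint"
    seq = []
    for i in range(L, 0, -1):
        j = back[i, m]
        seq.append(AA[j])
        m -= bin_of[AA[j]]
    return "".join(reversed(seq))

# Toy usage: random "model" log-probabilities for a 5-residue peptide whose
# precursor-derived residue mass we pretend to know (here: the mass of "PEPTK").
rng = np.random.default_rng(1)
log_probs = np.log(rng.dirichlet(np.ones(len(AA)), size=5))
target = sum(RESIDUE_MASS[a] for a in "PEPTK")
print(mass_constrained_decode(log_probs, target_mass=target))
```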
This precise mass control, coupled with bidirectional generation, significantly enhances peptide-level performance. PrimeNovo consistently demonstrates impressive peptide-level accuracy, achieving an average peptide recall of 64% on the widely used nine-species benchmark dataset. This performance significantly surpasses the existing best model, which achieves a peptide recall of 54% . Across a diverse range of other MS/MS datasets, PrimeNovo consistently maintains a notable advantage in peptide recall over the state-of-the-art model, achieving relative improvements from 16% to even doubling the accuracy, which highlights its exceptional performance and reliability. Moreover, by avoiding the sequential, one-by-one generation process inherent in autoregressive models, PrimeNovo also substantially increases its inference speed. This acceleration is further enhanced through the use of dynamic programming and CUDA-accelerated computation, allowing PrimeNovo to surpass the existing autoregressive models by up to 89 times. This speedup advantage enables PrimeNovo to make accurate predictions on large-scale spectrum data. We have demonstrated that PrimeNovo excels in large-scale metaproteomic research by accurately identifying a significantly greater number of species-specific peptides compared to previous methods, reducing the processing time from months, as required by Casanovo V2 with beam search, to just days. Furthermore, PrimeNovo’s versatility extends to the identification of PTMs, showcasing its potential as a transformative tool in proteomics research. PrimeNovo sets a benchmark with 64% peptide recall, achieving over 10% improvement in widely used nine-species dataset Echoing the approach of Casanovo V2, we utilized the large-scale MassIVE-KB dataset , featuring around 30 million peptide-to-spectrum matches (PSMs), as our training data. PrimeNovo was then evaluated on the nine-species testing benchmark directly. It is crucial to note, however, that baseline models like PointNovo, DeepNovo, and Casanovo were originally trained using the leave-one-species-out cross-validation (CV) strategy on the nine-species dataset. This strategy involves training on eight species and evaluating on the ninth each time. To facilitate a fair comparison, we also trained PrimeNovo on the nine-species dataset using the same CV strategy, following the data split used by all other baseline models. As shown in Fig. a, PrimeNovo CV outperformed other baseline models trained with this strategy by a large margin. Notably, even when trained solely on the nine-species benchmark dataset, PrimeNovo CV already matched the performance of Casanovo V2, which is the model trained on the large-scale MassIVE-KB dataset. When trained on the MassIVE-KB dataset, PrimeNovo set state-of-the-art results across all species in the nine-species benchmark (Fig. b and Supplementary Fig. ). The average peptide recall improved significantly, increasing from 45% with Casanovo to 54% with Casanovo V2, and further to 64% with PrimeNovo. This marks a 10% improvement over Casanovo V2 and a 19% increase over Casanovo. In the recall-coverage curve (Fig. a), PrimeNovo consistently held the top position across all coverage levels and species, reaffirming its status as a leading model in de novo peptide sequencing. At the amino acid (AA) level, PrimeNovo demonstrates significantly higher accuracy, as measured by both AA recall and AA precision, compared to Casanovo V2. As shown in Fig. 
c, PrimeNovo outperforms Casanovo V2 in AA recall across all nine species, with an improvement ranging from 3% to 6%. This performance advantage is consistent in AA precision, with a detailed comparison provided in the Supplementary Information. Additionally, we tested PrimeNovo on a revised nine-species test set introduced by Casanovo V2 , which featured higher data quality and a larger quantity of spectra, covering a wider range of data distributions for each species. In this updated test, PrimeNovo's average peptide recall soared to 75% across all species, up from the 65% achieved by Casanovo V2. A detailed comparison of these results is available in Supplementary Fig. . The outcomes from both the original and revised nine-species benchmark datasets highlight PrimeNovo's capability to accurately predict peptides across various species, demonstrating its effectiveness and versatility. PrimeNovo, leveraging its bi-directional information integration and parallel generation process as a non-autoregressive model, convincingly establishes its superiority across various facets of the sequencing task, beyond prediction accuracy alone. Firstly, our non-autoregressive model offers a substantial improvement in the inference speed compared to the autoregressive models of similar sizes, thanks to its concurrent generation process. As depicted in Fig. d, PrimeNovo, even without the Precise Mass Control (PMC) unit, is 3.4 times faster than Casanovo V2 without beam search decoding under identical testing conditions (i.e., using the same machine with identical CPU and GPU specifications). Upon incorporating post-prediction decoding strategies (PMC for PrimeNovo and beam search for Casanovo V2), PrimeNovo's advantage in inference speed becomes even more pronounced, making it over 28 times faster than Casanovo V2. Notably, considering that PrimeNovo without PMC can already outperform Casanovo V2 with beam search by an average of 6% on the nine-species benchmark dataset (as demonstrated in Fig. b), users can experience a maximum speedup of 89 times while making only minimal sacrifices in prediction accuracy when PMC is not deployed. We further investigated the influence of other factors, such as batch size, on inference speed; the results are included in the Supplementary Information. Furthermore, PrimeNovo exhibits exceptional prediction robustness across various challenges, including different levels of missing peaks in the spectrum, varying peptide lengths, and amino acid combinations that are prone to confusion. To illustrate this robustness, we categorized predictions on the nine-species benchmark dataset based on the degree of missing peaks in the input spectrum and the number of amino acids in the target peptide. The calculation of missing peaks in each spectrum follows the methodology outlined in a previous study by Beslic et al. , where we compute all the theoretical m / z values for potential y ions and b ions based on the true label and determine how many of these theoretical peaks are absent in the actual spectrum. As presented in Fig. e, it is not surprising to observe a decline in prediction accuracy as the number of missing peaks in the spectra increases. However, PrimeNovo maintains superior performance across all levels of missing peaks and consistently outperforms Casanovo V2. Similarly, Fig. f illustrates that PrimeNovo maintains its higher accuracy compared to Casanovo V2, irrespective of the length of the peptide being predicted. In Fig.
g, we further observe that PrimeNovo excels in accurately predicting amino acids that are challenging to identify due to their closely similar mass (<0.05 Da) to other amino acids. Specifically, the aa precision of all four similar amino acids is more than 10% more accurate on average compared to that of Casanovo V2. Specifically, the precision advantage is more than 18% on both K and Oxidized M amino acids. We then conducted an ablation study to investigate the performance gains achieved by each component of our model on the nine-species benchmark dataset. From Fig. h, we observe a 2% improvement in peptide recall when transitioning from an autoregressive model to a non-autoregressive model. The gain in performance is magnified by a large amount (7%) when PMC is introduced, as controllable generation is important in such tasks and improves the accuracy of our generated sequence. Remarkably, the performance boost from the non-autoregressive model is most pronounced when transitioning from the CV training data to the MassIVE-KB dataset, as the substantial increase in training data proves invaluable for learning the underlying bi-directional patterns in the sequencing task. Lastly, we see that utilizing PMC with augmented training data achieves the highest prediction accuracy, which further demonstrates PMC’s importance under different data availability situations. PrimeNovo exhibits strong generalization and adaptability capability across a wide array of MS/MS data sources As MS/MS data can vary significantly due to differences in biological samples, mass spectrometer parameters, and post-processing procedures, there is often a substantial degree of distributional shift across various MS/MS datasets. To demonstrate PrimeNovo’s ability to generalize effectively across a wide spectrum of distinct MS/MS data for diverse downstream tasks, we conducted an evaluation of PrimeNovo’s performance on some of the most widely used publicly available MS/MS datasets. We then compared the results with those of the current state-of-the-art models, Casanovo and Casanovo V2. In addition to the nine-species benchmark dataset discussed earlier, we selected three prominent MS/MS datasets that represent varying data sources and application settings: the PT , IgG1-Human-HC , and HCC datasets, and the details of these datasets are included in the Supplementary Information. We start by evaluating PrimeNovo’s ability to perform well in a zero-shot scenario, which means the model is tested without any specific adjustments to match the characteristics and distribution of the target dataset. As depicted in Fig. a and Supplementary Fig. , PrimeNovo exhibits significant performance superiority over both Casanovo V2 and Casanovo in terms of peptide recall when directly tested on three distinct datasets. Specifically, PrimeNovo outperforms Casanovo V2 by 13%, 14%, and 22% on PT, IgG1-Human-HC, and HCC datasets, respectively. This performance gap widens to 30%, 43%, and 38% when compared to Casanovo. For the IgG1-Human-HC dataset, following , we present the evaluation results for each human antigen type, as illustrated in Fig. b. PrimeNovo consistently outperforms Casanovo V2 across all six antigen types, achieving increased peptide recall ranging from 9% to 20%. We further examine the amino acid level accuracy on the unseen dataset. From Fig. c, it’s notable that PrimeNovo has a dominant AA level precision advantage over Casanovo V2 across all confidence levels of the model output. 
This indicates PrimeNovo’s better prediction of amino acids’ presence and locations. To further assess the performance disparities under the zero-shot setting, we leveraged identified PSMs from MaxQuant in each dataset as the benchmark. Then we compared the number of overlapping PSMs between the predicted PSMs generated by each de novo algorithm and the PSMs identified by MaxQuant. As displayed in Fig. d, Casanovo performed poorly on the HCC dataset, with only 8 PSMs overlapping with MaxQuant. In contrast, Casanovo V2 identified 9050 overlapping PSMs, while PrimeNovo predicted up to 22499 PSMs that perfectly matched those identified by MaxQuant. On the PT dataset, PrimeNovo, Casanovo V2, and Casanovo had 34747, 26591, and 16814 overlapping PSMs with MaxQuant search results, respectively. PrimeNovo demonstrates a much more consistent prediction behavior, aligning closely with high-quality traditional database-searching peptide identification software. Next, we examine how well PrimeNovo generalizes under the fine-tuning setting, which involves quickly adapting the model to new training data from the target distribution without starting the training process from scratch. This approach allows the model to leverage its previously acquired knowledge from the large dataset it was originally trained on and apply it to a more specific task or domain with only a minimal amount of additional training. We fine-tuned PrimeNovo on both the PT and HCC training datasets to assess the model’s adaptability. In order to gauge the impact of the quantity of additional data on fine-tuning performance, we conducted the fine-tuning with 100, 1000, 10,000, and 100,000 additional data points, respectively. We also fine-tuned Casanovo V2 under identical settings to compare the adaptability of the two models fairly. As depicted on the right side of Fig. e, augmenting the amount of additional data for fine-tuning does indeed enhance the model’s prediction accuracy on the corresponding test set, as the model gains a better understanding of the distributional nuances within the data. In comparison, PrimeNovo demonstrates a more robust ability to adapt to new data distributions and achieves higher accuracy after fine-tuning compared to the zero-shot scenario. It consistently outperforms Casanovo V2 when subjected to the same fine-tuning conditions, with 18% and 12% higher peptide level recall on HCC and PT test sets respectively when the fine-tuning reaches the best performance (Fig. e). It is noteworthy that a noticeable improvement in prediction accuracy is only observed after incorporating 10,000 additional MS data points during the fine-tuning process, indicating a recommended data size for future fine-tuning endeavors involving other data distributions. It’s important to note that the fine-tuning process can lead the model to forget the original data distribution from the training set, which is referred to as catastrophic forgetting. As illustrated in the left part of Fig. e, when fine-tuning is conducted exclusively with the target data, the performance in the nine-species benchmark dataset experiences a significant and gradual decline as more data samples are included (indicated by the dashed line). However, when the target data is mixed with the original training data, catastrophic forgetting is mitigated, as evident from the dashed line in the right part of Fig. e. 
Indeed, fine-tuning exclusively with the target data does introduce a relatively higher performance gain in the target test set compared to fine-tuning with mixed data (solid line in Fig. e), where the difference can be as much as 15% when the amount of the new data used for fine-tuning is large. By fine-tuning the model using a single dataset and then testing it on others, we can explore the similarities and disparities in data distributions among different pairs of datasets. This approach provides valuable insights into how closely related each MS/MS dataset is to the others and the extent to which a model’s knowledge can be transferred when trained on one dataset. In Fig. g, it’s not surprising to observe that the model exhibits the strongest transferability when the training and testing data share the same data source. Notably, MassIVE-KB, the training set for both our model and Casanovo V2, demonstrates the highest average peptide recall of 65% across all other test sets. This can be attributed to the diverse range of MS/MS data sources encompassed within the MassIVE-KB dataset, covering a wide spectrum of distinct MS/MS data. The PT dataset, with an average peptide recall of 56%, is also considered a high-quality dataset with robust transferability. It has been employed in the training of numerous other de novo models . However, the models trained on the HCC and nine-species benchmark datasets do not generalize well to other testing datasets. The nine-species benchmark exclusively covers MS/MS data for the included nine species and has a relatively small data size, while the HCC dataset is specific to human hepatocellular carcinoma. Additionally, we observe that models trained with the nine-species benchmark dataset and MassIVE-KB datasets exhibit relatively poor performance when applied to the HCC dataset, suggesting a notable disparity in their data distribution. Finally, we conduct a comparative analysis between PrimeNovo and concurrent approaches in de novo sequencing to illustrate the advancements and effectiveness of our method. Our comparative models, namely GraphNovo, a graph-based neural network, and PepNet, a CNN-based neural network, approach the problem from distinct angles, utilizing the latest deep learning techniques. It’s worth noting that both GraphNovo and PepNet are trained on their own designated training and testing datasets for their respective model versions. Consequently, we adopt a zero-shot evaluation approach, testing PrimeNovo on each of their test sets and comparing the results with their reported performances. We carefully examined the used data and ensured that there was no overlap between our training dataset and the test sets used by GraphNovo and PepNet. For the 3-species test set employed by GraphNovo, PrimeNovo demonstrates remarkable improvements in peptide recall, surpassing GraphNovo by 13%, 13%, and 11% in the A. thaliana , C. elegans , and E. coli species, respectively (see Fig. f). Furthermore, when tested on the PepNet test set, PrimeNovo exhibits a notable advantage of 14% and 24% in peptide recall over PepNet when predicting the peptide with charges of 2 and 3 respectively, detailed results of which are in Supplementary Fig. . 
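For clarity, the peptide- and amino-acid-level metrics quoted throughout these comparisons can be summarized with the simplified sketch below; actual benchmarks match amino acids by mass within tolerances (and treat residues such as I/L as equivalent), whereas this illustration uses plain position-wise string comparison.

```python
# Simplified sketch of the evaluation metrics used in this section.
# Real benchmarks match amino acids by mass with tolerances; exact string
# comparison is used here only for brevity.
def aa_matches(pred: str, true: str) -> int:
    """Number of position-wise identical residues (a crude stand-in for the
    mass-based matching used in de novo sequencing benchmarks)."""
    return sum(p == t for p, t in zip(pred, true))

def evaluate(predictions, targets):
    n_pred_aa = sum(len(p) for p in predictions)
    n_true_aa = sum(len(t) for t in targets)
    matched = sum(aa_matches(p, t) for p, t in zip(predictions, targets))
    peptide_hits = sum(p == t for p, t in zip(predictions, targets))
    return {
        "peptide_recall": peptide_hits / len(targets),  # whole-sequence matches
        "aa_precision": matched / n_pred_aa,            # matched / predicted AAs
        "aa_recall": matched / n_true_aa,               # matched / ground-truth AAs
    }

# Toy example
preds = ["PEPTIDE", "LKVNR", "GASPK"]
trues = ["PEPTIDE", "LKVQR", "GASPR"]
print(evaluate(preds, trues))
# peptide_recall ~ 0.33, aa_precision = aa_recall ~ 0.88
```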
PrimeNovo’s behavior analysis reveals an effective error correction mechanism behind non‐autoregressive modeling and PMC unit To gain a comprehensive understanding of the model’s behavior and to analyze how PrimeNovo utilizes the spectrum data to arrive at its final results, we employ some of the most recent model interpretability techniques, examining each component of our model in detail. We commence by visualizing the attention behavior of the encoder network in PrimeNovo and comparing it to that of Casanovo V2. The encoder’s role is critical, as it is responsible for feature extraction from the spectrum, significantly influencing how well the model utilizes input spectrum data. As depicted in the attention map in Fig. a, it is evident that Casanovo V2 assigns most of its attention weights to the first input position (the special token added at the beginning of the peak tokens). Attention weights for the remaining tokens are sparse, insignificant, and primarily concentrated along the diagonal direction. This behavior suggests that Casanovo V2 encodes information primarily within its special token, with limited utilization of other peak positions. In contrast, PrimeNovo exhibits a well-distributed attention pattern across different input peaks, each with varying levels of information density. Furthermore, we observe that the attention of PrimeNovo is more heavily allocated to peaks corresponding to the b-y ions of the true label, which are among the most crucial pieces of information for decoding the spectrum (as detailed in Supplementary Fig. ). This highlights PrimeNovo’s capacity to extract information more effectively from tokens it deems essential, and this behavior remains consistently active across all nine layers. In addition, we conducted a numerical comparison of the Value matrices learned by the encoder networks of both models . Each column in the Value matrix projection represents a hidden feature. To assess the diversity of features present in the Value matrix, we calculated the average cosine similarity between every pair of columns. As illustrated in the bar plot in Fig. a, it is evident that PrimeNovo’s feature vectors exhibit lower similarity to each other, as indicated by the lower average cosine similarity values in the plot. This suggests that our model’s Value matrix encompasses a broader spectrum of information and a more diverse set of features , . This finding could provide an additional explanation for our model’s superior performance. A more comprehensive assessment of the orthogonality of the Value matrix projection, evaluated by measuring the norm of the Gram matrix – , is provided in Supplementary Fig. . Since our non-autoregressive model predicts the entire sequence at once, we can examine how each of the nine model layers progressively improves the overall sequence prediction. We decode the whole sequence from each layer of our model and observe how the amino acids evolve over time. As illustrated in Fig. c, amino acid-level accuracy experiences a significant surge from layer seven to nine, with a consistent increasing trend across each layer. This signifies a continual improvement in prediction accuracy at each layer. By examining the case study presented in Fig. b, we discern that this increase in accuracy is achieved through a layer-wise self-correction mechanism. In this process, each layer gradually adjusts the erroneously predicted amino acids throughout the entire sequence, making them more reasonable and closer to the true answer.
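A minimal sketch of this kind of layer-wise read-out is given below; it assumes access to each decoder layer's hidden states and a shared output projection, which is an assumption made for illustration rather than a description of the released code.

```python
# Illustrative layer-wise read-out: project every layer's hidden states through
# an (assumed shared) output head and take the argmax per position, so the
# evolving sequence can be inspected layer by layer.
import torch

VOCAB = ["<pad>", "G", "A", "S", "P", "V", "T", "L", "N", "D", "Q", "K", "E"]

def decode_per_layer(layer_hidden, output_proj):
    """layer_hidden: list of tensors, each of shape (seq_len, d_model).
    output_proj: torch.nn.Linear(d_model, len(VOCAB)). Returns one string per layer."""
    sequences = []
    for h in layer_hidden:
        tokens = output_proj(h).argmax(dim=-1)          # (seq_len,)
        sequences.append("".join(VOCAB[i] for i in tokens if VOCAB[i] != "<pad>"))
    return sequences

# Toy usage with random hidden states standing in for a 9-layer model
torch.manual_seed(0)
d_model, seq_len, n_layers = 16, 8, 9
proj = torch.nn.Linear(d_model, len(VOCAB))
hidden = [torch.randn(seq_len, d_model) for _ in range(n_layers)]
for layer, seq in enumerate(decode_per_layer(hidden, proj), start=1):
    print(f"layer {layer}: {seq}")
```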
The non‐autoregressive model’s capability of enabling each amino acid to reference the surrounding amino acids for information facilitates accurate and effective correction across its layers. PMC, acting as the final safeguard against errors, rectifies model prediction errors by selecting the most probable sequence that adheres to the mass constraint. This process yields a slightly modified sequence compared to the output from the last layer, ultimately leading to the correct answer. We also employed saliency maps, a feature contribution technique, to analyze the impact of each peak on the prediction results. This technique generates contribution scores that provide a quick view of the impact of each peak on the prediction. A higher contribution score for a peak indicates a larger impact on the results. On the test set of PT, the contribution scores for all peaks in each spectrum were calculated. Subsequently, we sorted all peaks in descending order based on their contribution scores and selected the top 10 peaks. Using the known peptide sequences associated with these spectra, all possible fragment ions, considering only 1+ and 2+ ions, were generated using an in-house script (see Supplementary Note for more details). We then compared the m / z values of the top 10 peaks with the m / z values of all possible fragment ions, considering a match if the difference was within 0.05 Da. Finally, the percentage of the top 10 peaks that could be matched was calculated. As shown in Fig. d, ~40% of the spectra had a matched percentage of above 50%. Importantly, our model not only focused on the major peaks but also considered internal fragment ions. For example (Fig. e), in the spectrum corresponding to the peptide sequence SLEDLIFESLPENASHKLEVR, among the top 10 peaks with the highest contribution scores, seven were b ions, while the remaining three corresponded to intermediate fragment ions FE ((b8−c6)+), LIFES ((b9−b4)+), and PEN ((x11−x8)+), respectively. These results demonstrate that our model learned a few informative peaks from the spectra, which are useful for peptide inference. To analyze which peak in the spectrum led to the erroneous generation of the model, we visualized the spectrum by highlighting b-y ion peaks corresponding to the model’s predictions. As shown in Fig. f, Casanovo V2’s predicted sequence predominantly aligns its y-ions with input spectrum peaks, with very few calculated b-ions aligning with input peaks. This behavior is a consequence of the autoregressive model’s prediction direction from right to left, making it more natural to choose y-ion peaks for forming predictions. However, given the presence of noise in the spectrum, this prediction approach can lead to errors when y-ions are inaccurately selected, as demonstrated in Fig. f. In contrast, PrimeNovo’s predictions exhibit an alignment with both b-ions and y-ions in the input spectrum. This is due to our model’s prediction process, which leverages information from both directions, allowing it to effectively utilize the peak information from both ends of the sequence. Furthermore, we conducted a detailed analysis to identify the specific peak responsible for prediction errors in the last layer. This is achieved by calculating a gradient-based contribution score for each input peak, serving as a robust indicator of which input has a greater impact on the output, determined by the magnitude of the gradient. As observed in the left corner of Fig.
f, the highest contribution scores across the entire spectrum coincide precisely with the peak corresponding to PrimeNovo’s incorrectly predicted b -ion, and this critical information is captured and corrected by our PMC unit. PrimeNovo demonstrates exceptional performance in taxon-resolved peptide annotation, enhancing metaproteomic research We conducted an evaluation to gauge PrimeNovo’s proficiency in enhancing the identification of taxon-unique peptides, particularly in the context of metaproteomic research. The field of metaproteomics poses significant challenges when it comes to taxonomic annotation, primarily due to the vast diversity within microbiomes and the presence of closely related species that share high protein sequence similarity. Consequently, increasing the number of unique peptides represents a crucial approach for achieving precision in taxonomic annotations. In our assessment, we turned to a metaproteomic dataset obtained from gnotobiotic mice, hosting a consortium of 17 pre-defined bacterial strains (as summarized in Supplementary Table ). Within this dataset, we applied PrimeNovo and Casanovo V2 to sequence unidentified MS/MS spectra through database search, all without the need for fine-tuning . It’s worth noting that we are using Casanovo V2 without Beam Search (BS) due to the estimated inference time with BS exceeding 4000 A100 GPU hours on this large-scale dataset, which amounts to more than 21 days of inference with 8 A100 GPUs. As illustrated in Fig. a, PrimeNovo exhibits superior performance compared to Casanovo V2, identifying a significantly higher number of PSMs (8446 vs. 4072) and peptides (3157 vs. 1412) following the rigorous quality control process T, resulting in a relative increase of 107% and 124%, respectively. Furthermore, PrimeNovo excels in enhancing taxonomic resolution, outperforming Casanovo V2 in the detection of taxon-specific peptides. Notable increases are observed in bacterial-specific (1047 vs. 520), phylum-specific (828 vs. 399), genus-specific (511 vs. 241), and species-specific (215 vs. 92) peptides (Fig. b–d). Particularly noteworthy is the high identification accuracy achieved by PrimeNovo, where all identified peptides are correctly matched to known species, while Casanovo V2 exhibits one incorrect matching at the genus level (Fig. c). We further conducted an analysis of high-confidence identification results under the quality control process T. PrimeNovo demonstrated a significant increase in both PSM and peptide identifications, with a 66% increase (513,590 vs. 308,499) in PSMs and a 46% increase (58,392 vs. 39,866) in peptides. This result is further emphasized by the higher identifications of taxon-unique peptides achieved by PrimeNovo, surpassing Casanovo V2 in several categories, including bacterial-specific (36,704 vs. 24,349), phylum-specific (30,652 vs. 19,866), genus-specific (17,332 vs. 10,906), and species-specific (6848 vs. 4209) peptides (Fig. e). Subsequently, we assessed the models’ performance in taxonomic annotation at the protein level, which is crucial for enhancing the taxonomic resolution and contributing to subsequent research in the taxon-function network. As depicted in Supplementary Fig. , proteins identified by PrimeNovo and Casanovo were correctly assigned to 10 genera, 14 species, and 20 COG (Clusters of Orthologous Groups of proteins) categories. On the genus level, PrimeNovo identified a total of 6,883 proteins assigned to the 10 genera, with 6709 of them annotated to specific COG functions. 
In contrast, Casanovo V2 identified only 5028 proteins, with 4896 of them annotated. Thus, PrimeNovo achieved a 36.89% and 37.03% increase over Casanovo V2 in taxon and functional annotations. Furthermore, a detailed examination at the genus level revealed that PrimeNovo increased the number of proteins assigned to each genus compared to Casanovo V2: Bacteroides (4926 vs. 3623), Clostridium (3 vs. 2), Collinsella (486 vs. 383), Escherichia (91 vs. 62), Monoglobus (294 vs. 197), Odoribacter (297 vs. 204), Parabacteroides (576 vs. 425), Phocaeicola (204 vs 130), Ruminococcus (3 vs. 1), Ruthenibacterium (3 vs. 1). Similarly, PrimeNovo exhibited significant potential for taxonomic annotation at the species level. Compared to Casanovo V2, PrimeNovo identified an additional 45.32% (3136 vs. 2158) of proteins assigned to the 14 species, with 45.03% (3034 vs. 2092) of these proteins annotated to specific COG functions. These results demonstrate that PrimeNovo significantly enhances taxonomic resolution at both the peptide and protein levels, highlighting its substantial potential in metaproteomic research. PrimeNovo enables accurate prediction of a wide range of different post-translation modifications PTMs play a crucial role in expanding the functional diversity of the proteome , going well beyond the inherent capabilities of the genetic code. The primary challenge lies in the underrepresentation of modified peptides within the dataset, especially those that have not been enriched for certain modifications. The detection of such peptides is often overshadowed by the more prevalent unmodified peptides. Moreover, the distinct physical properties of modified residues—namely their mass and ionization efficiency—further complicate the detection – . The capabilities of current database search engines are limited, permitting the consideration of only a select few modifications. This scarcity leads to a low presence of modified peptides in the training data, thereby making it difficult for models to accurately identify diverse PTMs from spectral data. To address these challenges, PrimeNovo has been advanced in predicting peptide sequences with multiple PTMs, establishing itself as a foundational model divergent from conventional methods that start anew for each PTM type. By fine-tuning enriched PTM data, PrimeNovo gains extensive exposure to multiple PTM types while retaining its ability to recognize standard peptides. Architectural adjustments, as illustrated in Fig. a, including the addition of a classification head above the encoder to identify specific PTMs and a newly initialized linear layer above the decoder, enhance PrimeNovo’s ability to decode peptides with PTMs, broadening the model’s token repertoire. The final loss is formulated in a multi-task setting, combining the peptide decoding loss with a binary classification task for PTM identification loss. Our training methodology employed a dataset encompassing 21 distinct PTMs, referred to as the 21PTMs dataset, as detailed in ref. . We fine-tuned PrimeNovo for each PTM to ascertain its proficiency in peptide generation and PTM classification, in accordance with previously described methods. To ensure dataset balance, we included an approximately equal number of peptides with and without PTMs, culminating in a total of 703,606 PSMs for the dataset. 
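To make the multi-task formulation above concrete, the following is a minimal PyTorch-style sketch of how a CTC peptide-decoding loss could be combined with a binary PTM-classification loss. The tensor shapes, the weighting factor alpha, and all function and variable names are illustrative assumptions rather than PrimeNovo's actual implementation.

```python
import torch
import torch.nn as nn

ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)   # peptide decoding term
cls_loss = nn.BCEWithLogitsLoss()                     # binary PTM-identification term

def multitask_loss(decoder_log_probs,  # (T, batch, vocab): log-softmax over tokens per position
                   target_peptides,    # (batch, max_len): integer amino-acid tokens
                   output_lengths,     # (batch,): decoder output lengths (all T for a fixed-length decoder)
                   target_lengths,     # (batch,): true peptide lengths
                   ptm_logits,         # (batch,): logit from the encoder classification head
                   ptm_labels,         # (batch,): 1.0 if the spectrum carries the PTM, else 0.0
                   alpha=1.0):         # assumed weighting between the two tasks
    l_dec = ctc_loss(decoder_log_probs, target_peptides, output_lengths, target_lengths)
    l_cls = cls_loss(ptm_logits, ptm_labels)
    return l_dec + alpha * l_cls
```

In practice the two terms could be weighted or scheduled differently; the sketch only shows the additive multi-task structure described above.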
The comprehensive fine-tuning endeavor across the 21 PTMs allows PrimeNovo to discern a broad spectrum of PTMs, a capability evidenced by the exemplary performance metrics for each PTM category depicted in Fig. c. Specifically, the classification accuracies for all PTMs exceeded 95%, except asymmetric and symmetric Dimethylation at Arginine (R) and Monomethylation at Arginine (R), which have classification accuracies of 77%, 77%, and 69%, respectively. Excluding Monomethylation at Arginine (R), which recorded a peptide recall rate of 48%, the de novo sequencing recall for peptides with the other 20 PTMs exceeded 61%. Such peptide recall levels are on par with performance on other datasets without special PTMs, such as an average peptide recall of 64% across the nine-species datasets. Detailed insights into the classification accuracy and peptide recall for each PTM are provided in Supplementary Fig. . To assess PrimeNovo’s inference performance on PTMs in a more applied context, we selected a phosphorylation dataset from Xu et al. (denoted as the 2020-Cell-LUAD dataset), which focuses on human lung adenocarcinoma and comprises 103 LUAD tumors and their corresponding non-cancerous adjacent tissues. It offers both phosphorylation-enriched and non-enriched data. We randomly selected a portion (3389 PSMs) of the enriched data for testing and the rest for training, ensuring that no peptide sequence overlapped between the training and testing sets. We fine-tuned PrimeNovo on this training data, and the test results demonstrate that PrimeNovo distinguishes between phosphorylated and non-phosphorylated spectra with a classification accuracy of 98% and achieves a peptide recall rate of 66% on both the cancer tissue data and the non-cancerous adjacent tissue test data, as detailed in Supplementary Table . To assess PrimeNovo’s capability to identify modified peptides within non-enriched proteomic datasets, we deployed it for the analysis of unidentified MS/MS spectra from the non-enriched 2020-Cell-LUAD dataset, notably without conducting dataset-specific fine-tuning. Given the absence of peptide identifications from existing databases in this dataset, we relied on the model’s confidence scores to select 300 high-quality predicted peptides. We then undertook a comparative analysis between the theoretical spectrum, as generated by DeepPhosPho , and the original input spectrum corresponding to these peptides, as illustrated in Fig. b. Through this process, we pinpointed 12 peptides as candidates for synthesis validation and further functional investigation. The details of the selection methodology are elaborated upon in Supplementary Note . All 12 phosphopeptides predicted by PrimeNovo from non-enriched data were validated using their synthetic counterparts, as depicted in Fig. and Supplementary Figs. and . Fig. d, e showcase the alignment between theoretical and experimental spectra for two representatives of the 12 synthesized phosphorylated peptides. The comparison reveals a strong correspondence between the predicted b-ion and y-ion peaks and the experimental spectrum’s signal peaks, evidenced by a Pearson correlation exceeding 0.90 for nine paired spectra, and 0.70, 0.72, and 0.86 for the remaining three pairs. This correlation underscores the model’s high predictive precision. Further investigation into the proteins associated with these phosphopeptides highlighted their relevance to lung adenocarcinoma (LUAD). For example, the peptide LGpSGFSLTR (2+) (Fig.
d) from Filamin-C (FLNC) aligns with findings that the ITPKA and Filamin C interaction fosters a dense F-actin network, enhancing LUAD cell migration . Another identified peptide, HGpSDPAFAPGPR (2+) from FAM83H (Fig. e), is noted for being upregulated in LUAD, indicating a potential prognostic marker of LUAD , . Additionally, peptides WLDEpSDAEMELR, GPAGEAGApSPPVR, and AQpTPPGPSLSGSK reveal proteins (HACD3, SNTB2, and SRRM2) not previously associated with LUAD, but there are studies suggesting potential relevance between these three proteins and other cancer types. This offers directions for potential biological research on the disease by examining the above-relevant proteins. For detailed results concerning the remaining peptides and the comprehensive experimental methodologies used for their synthesis and analysis, please see Supplementary Note . These results demonstrate that PrimeNovo has a high sensitivity in detecting PTMs from proteomic datasets, especially those non-enriched ones, which provides a solution for low-abundance PTM discovery. Peptide sequencing is vital for understanding protein structures and functions. This work introduces PrimeNovo, a Transformer-based model for fast, accurate de novo peptide sequencing. Using a non-autoregressive architecture and a precise PMC decoding unit, PrimeNovo achieves state-of-the-art performance across spectrum datasets. Its speed and adaptability make it ideal for large-scale sequencing, with robust performance in zero-shot and fine-tuning scenarios. PrimeNovo excels in metaproteomic peptide annotation, aiding microorganism identification and functional analysis, while its PTM detection capability after finetuning enables the discovery of peptides beyond traditional methods. Echoing the approach of Casanovo V2, we utilized the large-scale MassIVE-KB dataset , featuring around 30 million peptide-to-spectrum matches (PSMs), as our training data. PrimeNovo was then evaluated on the nine-species testing benchmark directly. It is crucial to note, however, that baseline models like PointNovo, DeepNovo, and Casanovo were originally trained using the leave-one-species-out cross-validation (CV) strategy on the nine-species dataset. This strategy involves training on eight species and evaluating on the ninth each time. To facilitate a fair comparison, we also trained PrimeNovo on the nine-species dataset using the same CV strategy, following the data split used by all other baseline models. As shown in Fig. a, PrimeNovo CV outperformed other baseline models trained with this strategy by a large margin. Notably, even when trained solely on the nine-species benchmark dataset, PrimeNovo CV already matched the performance of Casanovo V2, which is the model trained on the large-scale MassIVE-KB dataset. When trained on the MassIVE-KB dataset, PrimeNovo set state-of-the-art results across all species in the nine-species benchmark (Fig. b and Supplementary Fig. ). The average peptide recall improved significantly, increasing from 45% with Casanovo to 54% with Casanovo V2, and further to 64% with PrimeNovo. This marks a 10% improvement over Casanovo V2 and a 19% increase over Casanovo. In the recall-coverage curve (Fig. a), PrimeNovo consistently held the top position across all coverage levels and species, reaffirming its status as a leading model in de novo peptide sequencing. At the amino acid (AA) level, PrimeNovo demonstrates significantly higher accuracy, as measured by both AA recall and AA precision, compared to Casanovo V2. As shown in Fig. 
c, PrimeNovo outperforms Casanovo V2 in AA recall across all nine species, with an improvement ranging from 3% to 6%. This performance advantage is consistent in AA precision, with a detailed comparison provided in the Supplementary Information. Additionally, we tested PrimeNovo on a revised nine-species test set introduced by Casanovo V2 , which featured higher data quality and a larger quantity of spectra, covering a wider range of data distributions for each species. In this updated test, PrimeNovo’s average peptide recall soared to 75% across all species, from the previous 65% by Casanovo V2. A detailed comparison of these results is available in Supplementary Fig. . The outcomes from both the original and revised nine-species benchmark datasets highlight PrimeNovo’s capability to accurately predict peptides across various species, demonstrating its effectiveness and versatility. PrimeNovo, leveraging its bi-directional information integration and parallel generation process as a non-autoregressive model, convincingly establishes its superiority across various facets of sequencing tasks, transcending mere high prediction accuracy. Firstly, our non-autoregressive model offers a substantial improvement in the inference speed compared to the autoregressive models of similar sizes, thanks to its concurrent generation process. As depicted in Fig. d, PrimeNovo, even without the Precise Mass Control (PMC) unit, achieves a staggering speed advantage of 3.4 times faster over Casanovo V2 without beam search decoding under identical testing conditions (i.e., using the same machine with identical CPU and GPU specifications). Upon incorporating post-prediction decoding strategies (PMC for PrimeNovo and beam search for Casanovo V2), PrimeNovo’s advantage in inference speed becomes even more pronounced, making it over 28 times faster than Casanovo V2. Notably, considering that PrimeNovo without PMC can already outperform Casanovo V2 with beam search by an average of 6% on the nine-species benchmark dataset (as demonstrated in Fig. b), users can experience a maximum speedup of 89 times while making only minimal sacrifices in prediction accuracy when PMC is not deployed. We further investigated other factors, such as batch size on the speed and the results are included in Supplementary Information. Furthermore, PrimeNovo exhibits exceptional prediction robustness across various challenges, including different levels of missing peaks in the spectrum, varying peptide lengths, and amino acid combinations that are prone to confusion. To illustrate this robustness, we categorized predictions on the nine-species benchmark dataset based on the degree of missing peaks in the input spectrum and the number of amino acids in the target peptide. The calculation of missing peaks in each spectrum follows the methodology outlined in a previous study by Beslic et al. , where we compute all the theoretical m / z values for potential y ions and b ions based on the true label and determine how many of these theoretical peaks are absent in the actual spectrum. As presented in Fig. e, it is not surprising to observe a decline in prediction accuracy as the number of missing peaks in the spectra increases. However, PrimeNovo consistently indicates superior performance across all levels of missing peaks and consistently outperforms Casanovo V2. Similarly, Fig. f illustrates that PrimeNovo maintains its higher accuracy compared to Casanovo V2, irrespective of the length of the peptide being predicted. In Fig. 
g, we further observe that PrimeNovo excels in accurately predicting amino acids that are challenging to identify due to their closely similar mass (<0.05 Da) to other amino acids. Specifically, the aa precision of all four similar amino acids is more than 10% more accurate on average compared to that of Casanovo V2. Specifically, the precision advantage is more than 18% on both K and Oxidized M amino acids. We then conducted an ablation study to investigate the performance gains achieved by each component of our model on the nine-species benchmark dataset. From Fig. h, we observe a 2% improvement in peptide recall when transitioning from an autoregressive model to a non-autoregressive model. The gain in performance is magnified by a large amount (7%) when PMC is introduced, as controllable generation is important in such tasks and improves the accuracy of our generated sequence. Remarkably, the performance boost from the non-autoregressive model is most pronounced when transitioning from the CV training data to the MassIVE-KB dataset, as the substantial increase in training data proves invaluable for learning the underlying bi-directional patterns in the sequencing task. Lastly, we see that utilizing PMC with augmented training data achieves the highest prediction accuracy, which further demonstrates PMC’s importance under different data availability situations. As MS/MS data can vary significantly due to differences in biological samples, mass spectrometer parameters, and post-processing procedures, there is often a substantial degree of distributional shift across various MS/MS datasets. To demonstrate PrimeNovo’s ability to generalize effectively across a wide spectrum of distinct MS/MS data for diverse downstream tasks, we conducted an evaluation of PrimeNovo’s performance on some of the most widely used publicly available MS/MS datasets. We then compared the results with those of the current state-of-the-art models, Casanovo and Casanovo V2. In addition to the nine-species benchmark dataset discussed earlier, we selected three prominent MS/MS datasets that represent varying data sources and application settings: the PT , IgG1-Human-HC , and HCC datasets, and the details of these datasets are included in the Supplementary Information. We start by evaluating PrimeNovo’s ability to perform well in a zero-shot scenario, which means the model is tested without any specific adjustments to match the characteristics and distribution of the target dataset. As depicted in Fig. a and Supplementary Fig. , PrimeNovo exhibits significant performance superiority over both Casanovo V2 and Casanovo in terms of peptide recall when directly tested on three distinct datasets. Specifically, PrimeNovo outperforms Casanovo V2 by 13%, 14%, and 22% on PT, IgG1-Human-HC, and HCC datasets, respectively. This performance gap widens to 30%, 43%, and 38% when compared to Casanovo. For the IgG1-Human-HC dataset, following , we present the evaluation results for each human antigen type, as illustrated in Fig. b. PrimeNovo consistently outperforms Casanovo V2 across all six antigen types, achieving increased peptide recall ranging from 9% to 20%. We further examine the amino acid level accuracy on the unseen dataset. From Fig. c, it’s notable that PrimeNovo has a dominant AA level precision advantage over Casanovo V2 across all confidence levels of the model output. This indicates PrimeNovo’s better prediction of amino acids’ presence and locations. 
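Several of the analyses in this section rest on the same primitive: enumerating the theoretical b- and y-ion m/z values of a peptide and checking them against the observed peaks within a small tolerance (the missing-peak counts above and the 0.05 Da matching used in the saliency analysis). The sketch below illustrates that primitive under simplifying assumptions: the residue-mass table is abbreviated, modifications and internal fragments are ignored, and the helper names are hypothetical rather than the paper's in-house script.

```python
# Monoisotopic residue masses (Da); a small, unmodified subset is enough for the sketch.
RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276, "V": 99.06841,
    "T": 101.04768, "C": 103.00919, "L": 113.08406, "I": 113.08406, "N": 114.04293,
    "D": 115.02694, "Q": 128.05858, "K": 128.09496, "E": 129.04259, "M": 131.04049,
    "H": 137.05891, "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}
PROTON, WATER = 1.00728, 18.01056

def by_ions(peptide, charges=(1, 2)):
    """Theoretical b- and y-ion m/z values for a peptide (prefix/suffix fragments)."""
    masses = [RESIDUE_MASS[aa] for aa in peptide]
    ions = []
    for i in range(1, len(masses)):
        b = sum(masses[:i])            # prefix fragment mass
        y = sum(masses[i:]) + WATER    # suffix fragment keeps the C-terminal water
        for z in charges:
            ions.append((b + z * PROTON) / z)
            ions.append((y + z * PROTON) / z)
    return ions

def match_stats(peptide, observed_mz, tol=0.05):
    """Fraction of theoretical ions found in the spectrum, and the number missing."""
    theo = by_ions(peptide)
    matched = sum(any(abs(t - o) <= tol for o in observed_mz) for t in theo)
    return matched / len(theo), len(theo) - matched
```

At the residue level, pairs such as K (128.095 Da) and Q (128.059 Da), or F (147.068 Da) and oxidized M (147.035 Da), differ by well under 0.05 Da, which is why these amino acids are singled out as confusion-prone in Fig. g.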
To further assess the performance disparities under the zero-shot setting, we leveraged identified PSMs from MaxQuant in each dataset as the benchmark. Then we compared the number of overlapping PSMs between the predicted PSMs generated by each de novo algorithm and the PSMs identified by MaxQuant. As displayed in Fig. d, Casanovo performed poorly on the HCC dataset, with only 8 PSMs overlapping with MaxQuant. In contrast, Casanovo V2 identified 9050 overlapping PSMs, while PrimeNovo predicted up to 22499 PSMs that perfectly matched those identified by MaxQuant. On the PT dataset, PrimeNovo, Casanovo V2, and Casanovo had 34747, 26591, and 16814 overlapping PSMs with MaxQuant search results, respectively. PrimeNovo demonstrates a much more consistent prediction behavior, aligning closely with high-quality traditional database-searching peptide identification software. Next, we examine how well PrimeNovo generalizes under the fine-tuning setting, which involves quickly adapting the model to new training data from the target distribution without starting the training process from scratch. This approach allows the model to leverage its previously acquired knowledge from the large dataset it was originally trained on and apply it to a more specific task or domain with only a minimal amount of additional training. We fine-tuned PrimeNovo on both the PT and HCC training datasets to assess the model’s adaptability. In order to gauge the impact of the quantity of additional data on fine-tuning performance, we conducted the fine-tuning with 100, 1000, 10,000, and 100,000 additional data points, respectively. We also fine-tuned Casanovo V2 under identical settings to compare the adaptability of the two models fairly. As depicted on the right side of Fig. e, augmenting the amount of additional data for fine-tuning does indeed enhance the model’s prediction accuracy on the corresponding test set, as the model gains a better understanding of the distributional nuances within the data. In comparison, PrimeNovo demonstrates a more robust ability to adapt to new data distributions and achieves higher accuracy after fine-tuning compared to the zero-shot scenario. It consistently outperforms Casanovo V2 when subjected to the same fine-tuning conditions, with 18% and 12% higher peptide level recall on HCC and PT test sets respectively when the fine-tuning reaches the best performance (Fig. e). It is noteworthy that a noticeable improvement in prediction accuracy is only observed after incorporating 10,000 additional MS data points during the fine-tuning process, indicating a recommended data size for future fine-tuning endeavors involving other data distributions. It’s important to note that the fine-tuning process can lead the model to forget the original data distribution from the training set, which is referred to as catastrophic forgetting. As illustrated in the left part of Fig. e, when fine-tuning is conducted exclusively with the target data, the performance in the nine-species benchmark dataset experiences a significant and gradual decline as more data samples are included (indicated by the dashed line). However, when the target data is mixed with the original training data, catastrophic forgetting is mitigated, as evident from the dashed line in the right part of Fig. e. Indeed, fine-tuning exclusively with the target data does introduce a relatively higher performance gain in the target test set compared to fine-tuning with mixed data (solid line in Fig. 
e), where the difference can be as much as 15% when the amount of the new data used for fine-tuning is large. By fine-tuning the model using a single dataset and then testing it on others, we can explore the similarities and disparities in data distributions among different pairs of datasets. This approach provides valuable insights into how closely related each MS/MS dataset is to the others and the extent to which a model’s knowledge can be transferred when trained on one dataset. In Fig. g, it’s not surprising to observe that the model exhibits the strongest transferability when the training and testing data share the same data source. Notably, MassIVE-KB, the training set for both our model and Casanovo V2, demonstrates the highest average peptide recall of 65% across all other test sets. This can be attributed to the diverse range of MS/MS data sources encompassed within the MassIVE-KB dataset, covering a wide spectrum of distinct MS/MS data. The PT dataset, with an average peptide recall of 56%, is also considered a high-quality dataset with robust transferability. It has been employed in the training of numerous other de novo models . However, the models trained on the HCC and nine-species benchmark datasets do not generalize well to other testing datasets. The nine-species benchmark exclusively covers MS/MS data for the included nine species and has a relatively small data size, while the HCC dataset is specific to human hepatocellular carcinoma. Additionally, we observe that models trained with the nine-species benchmark dataset and MassIVE-KB datasets exhibit relatively poor performance when applied to the HCC dataset, suggesting a notable disparity in their data distribution. Finally, we conduct a comparative analysis between PrimeNovo and concurrent approaches in de novo sequencing to illustrate the advancements and effectiveness of our method. Our comparative models, namely GraphNovo, a graph-based neural network, and PepNet, a CNN-based neural network, approach the problem from distinct angles, utilizing the latest deep learning techniques. It’s worth noting that both GraphNovo and PepNet are trained on their own designated training and testing datasets for their respective model versions. Consequently, we adopt a zero-shot evaluation approach, testing PrimeNovo on each of their test sets and comparing the results with their reported performances. We carefully examined the used data and ensured that there was no overlap between our training dataset and the test sets used by GraphNovo and PepNet. For the 3-species test set employed by GraphNovo, PrimeNovo demonstrates remarkable improvements in peptide recall, surpassing GraphNovo by 13%, 13%, and 11% in the A. thaliana , C. elegans , and E. coli species, respectively (see Fig. f). Furthermore, when tested on the PepNet test set, PrimeNovo exhibits a notable advantage of 14% and 24% in peptide recall over PepNet when predicting the peptide with charges of 2 and 3 respectively, detailed results of which are in Supplementary Fig. . To gain a comprehensive understanding of the model’s behavior and to analyze how PrimeNovo utilizes the spectrum data to arrive at its final results, we employ some of the most recent model interpretability techniques, examining each component of our model in detail. We commence by visualizing the attention behavior of the encoder network in PrimeNovo and comparing it to that of Casanovo V2. 
The encoder’s role is critical, as it is responsible for feature extraction from the spectrum, significantly influencing how well the model utilizes input spectrum data. As depicted in the attention map in Fig. a, it is evident that Casanovo V2 predominantly assigns most of its attention weights to the first input position (the special token added at the beginning of the peak tokens). Attention weights for the remaining tokens are sparse, insignificant, and primarily concentrated along the diagonal direction. This behavior suggests that Casanovo V2 encodes information primarily within its special token, with limited utilization of other peak positions. In contrast, PrimeNovo exhibits a well-distributed attention pattern across different input peaks, each with varying levels of information density. Furthermore, we observe that the attention of PrimeNovo is more heavily allocated to peaks corresponding to the b-y ions of the true label, which are among the most crucial pieces of information for decoding the spectrum (as detailed in Supplementary Fig. ). This highlights PrimeNovo’s capacity to extract information more effectively from tokens it deems essential, and this behavior remains consistently active across all nine layers. Furthermore, we conducted a numerical comparison of the Value matrices learned by the encoder networks of both models . Each column in the Value matrix projection represents a hidden feature. To assess the diversity of features present in the Value matrix, we calculated the average cosine similarity between every pair of columns. As illustrated in the bar plot in Fig. a, it is evident that PrimeNovo’s feature vectors exhibit lower similarity to each other, as indicated by the lower average cosine similarity values in the plot. This suggests that our model’s Value matrix encompasses a broader spectrum of information and a more diverse set of features , . This finding could provide an additional explanation for our model’s superior performance. For a more comprehensive assessment of the orthogonality of the Value matrix projection, which is evaluated by measuring the norm of the Gram matrix – (see Supplementary Fig. ). Since our non-autoregressive model predicts the entire sequence at once, we can examine how each of the nine model layers progressively improves the overall sequence prediction. We decode the whole sequence from each layer of our model and observe how the amino acids evolve over time. As illustrated in Fig. c, amino acid-level accuracy experiences a significant surge from layer seven to nine, with a consistent increasing trend across each layer. This signifies a continual improvement in prediction accuracy at each layer. By examining the case study presented in Fig. b, we discern that this increase in accuracy is achieved through a layer-wise self-correction mechanism. In this process, each layer gradually adjusts the erroneously predicted amino acids throughout the entire sequence, making them more reasonable and closer to the true answer. The non-autoregressive model’s capability of enabling each amino acid to reference the surrounding amino acids for information facilitates accurate and effective correction across its layers. PMC, acting as the final safeguard against errors, rectifies model prediction errors by selecting the most probable sequence that adheres to the mass constraint. This process yields a slightly modified sequence compared to the output from the last layer, ultimately leading to the correct answer. 
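As a small illustration of the Value-matrix diversity comparison described above, the snippet below computes the average pairwise cosine similarity between the columns of a value projection matrix; lower values indicate a more diverse set of learned features. The matrix shape and the way such weights would be pulled out of a trained checkpoint are assumptions made for the example.

```python
import numpy as np

def mean_pairwise_column_cosine(value_matrix):
    """Average cosine similarity between all pairs of columns of a Value projection.

    `value_matrix` is assumed to have shape (d_model, d_features): one column per feature.
    """
    # Normalize every column to unit length; the Gram matrix then holds pairwise cosines.
    cols = value_matrix / np.linalg.norm(value_matrix, axis=0, keepdims=True)
    cos = cols.T @ cols                      # (d_features, d_features) cosine similarities
    n = cos.shape[0]
    off_diag = cos[~np.eye(n, dtype=bool)]   # drop the trivial self-similarities
    return off_diag.mean()

# Example with a randomly initialized matrix; in the paper the comparison is between
# the trained encoder weights of PrimeNovo and Casanovo V2.
rng = np.random.default_rng(0)
print(mean_pairwise_column_cosine(rng.normal(size=(512, 64))))
```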
Training datasets
The dataset used for training our model is the MassIVE Knowledge Base spectral library version 1 (MassIVE-KB), which we obtained from the MassIVE repository. This extensive dataset comprises over 2.1 million precursors originating from 19,610 proteins. These precursors were distilled from a vast pool of human data, amounting to more than 31 terabytes, gathered from 227 public proteomics datasets within the MassIVE repository.
Overview and notation
In the de novo sequencing task, we are provided with a spectrum instance denoted as S = {I, c, m}, which is generated by a mass spectrometer when analyzing biological samples. Here, I = {(m/z_1, i_1), (m/z_2, i_2), ⋯, (m/z_k, i_k)} represents a set of mass-to-charge ratio and corresponding intensity pairs. These pairs are retained after being filtered by the mass spectrometer threshold. Additionally, c denotes the measured charge of the peptide (precursor), and m represents the measured total mass of this peptide. Our primary objective in this context is to derive the correct amino acid sequence, denoted as A = {a_1, a_2, ⋯, a_n}, from the information contained within S.
Non-autoregressive transformer backbone
We adopt the transformer encoder–decoder network as our foundational model, following the work of Casanovo. In the encoder network, we handle the mass-to-charge ratio m/z and the intensity information i from the set I separately before merging them. To represent each m/z value, we employ a sinusoidal embedding function, which effectively captures the relative magnitude, an essential factor in determining the peptide fragments:
$$g(m/z,\ j)=\begin{cases}\sin\!\left(\dfrac{2\pi\, m/z}{\rho_{\min}\left(\rho_{\max}/\rho_{\min}\right)^{2j/d}}\right), & \text{for } j \le \frac{d}{2}\\[1.5ex] \cos\!\left(\dfrac{2\pi\, m/z}{\rho_{\min}\left(\rho_{\max}/\rho_{\min}\right)^{2j/d}}\right), & \text{for } j > \frac{d}{2}\end{cases}$$
Here, j signifies the position in the d-dimensional hidden embedding. The parameters ρ_max and ρ_min define the wavelength range for this embedding. In contrast, we handle intensity values through a linear projection layer. In the non-autoregressive model, the only architectural distinction between the encoder and decoder lies in the cross-attention mechanism. Therefore, we employ identical notations for both components.
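As a concrete illustration of the peak embedding just described, the snippet below evaluates the sinusoidal function g(m/z, j) for a single m/z value. The dimensionality and the wavelength bounds ρ_min and ρ_max are illustrative defaults, not the values used to train PrimeNovo.

```python
import numpy as np

def mz_sinusoidal_embedding(mz, d=512, rho_min=0.001, rho_max=10000.0):
    """Sinusoidal embedding of a single m/z value, following the formula above.

    rho_min / rho_max set the wavelength range; the defaults are assumptions for the sketch.
    """
    j = np.arange(d)
    wavelength = rho_min * (rho_max / rho_min) ** (2.0 * j / d)
    phase = 2.0 * np.pi * mz / wavelength
    # First half of the dimensions use sine, second half cosine, as in the piecewise definition.
    return np.where(j <= d // 2, np.sin(phase), np.cos(phase))

embedding = mz_sinusoidal_embedding(1042.55)   # e.g., one fragment peak's m/z
```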
In a formal sense, each layer computes a representation R based on the preceding feature embeddings. For the k-th layer, the representation is
$$R^{(k)} = \mathrm{AttentionLayer}^{(k)}\!\left(R^{(k-1)}\right) \tag{1}$$
Here, R^{(0)} signifies the spectrum embedding for the encoder, while for the decoder it represents the summation of positional and precursor embeddings. To maintain consistency, we keep the generation length fixed at t for the decoder. Consequently, the output of the final decoder layer undergoes a softmax operation, which calculates the probability distribution over tokens for each position.
Peptide reduction strategy for our non-autoregressive modeling
Our strategy for non-autoregressive modeling deviates from conventional autoregressive generation, which predicts each token’s probability as P(a_{i+1} ∣ a_{1:i}). This approach, however, restricts bidirectional information, contrasting with protein structures where each amino acid is informed by both neighbors. To address this, we propose a non-autoregressive model where all amino acids are generated simultaneously, allowing each position to access bidirectional context. In this framework, each amino acid probability, P(a), is independently modeled, but this independence can lead to weak global coherence, resulting in nonsensical sequences despite locally accurate regions. For instance, a phrase like “au revoir” might ambiguously split into “see bye” in non-autoregressive translation with cross-entropy loss due to a lack of sequence-level cohesion. To mitigate this, we employ CTC loss, which improves global consistency by enhancing sequence-level coherence, leading to more accurate and cohesive peptide generation. To address cases where the generated token sequence, with a maximum length t, exceeds the target length, we introduce a reduction function, Γ(⋅), in non-autoregressive generation. This function merges consecutive identical amino acids, for example:
$$\Gamma(\mathrm{AAGGGTYYYWWRWW}) = \mathrm{AGTYWRW} \tag{2}$$
However, simple reduction is unsuitable for sequences with consecutive identical amino acids. Inspired by Graves et al., we use a blank token ϵ during generation. Identical amino acids separated by ϵ are not merged, and ϵ is later removed, resulting in
$$\Gamma(\mathrm{A}\epsilon\epsilon\mathrm{AGG}\epsilon\mathrm{GTYYYWWRW}\epsilon\epsilon\epsilon\epsilon\mathrm{W}) = \mathrm{AAGGTYWRWW} \tag{3}$$
For a visual representation of this process, please refer to the Supplementary Fig. .
Definition of CTC loss
Following the CTC reduction rule described above, it is possible to obtain multiple decoding paths, denoted as y, which can all be reduced to the target sequence A. For instance, both CCGT and CGϵT, among many others, can be transformed into the target sequence CGT. Consequently, the probability of generating the target sequence A is the sum of the probabilities associated with all paths y that can be reduced to A:
$$P(A\mid S) = \sum_{\mathbf{y}:\,\Gamma(\mathbf{y})=A} P(\mathbf{y}\mid S) = \sum_{\mathbf{y}:\,\Gamma(\mathbf{y})=A}\ \sum_{y_i \in \mathbf{y}} \log\bigl(P(y_i\mid S)\bigr) \tag{4}$$
Here, y = (y_1, y_2, ⋯, y_t) represents a single decoding path in the non-autoregressive model output, satisfying the condition Γ(y) = A. The overall probability of generating the target sequence A, denoted as P(A ∣ S), is then computed as the sum of the probabilities of generating each y, with y_i at each position.
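Before continuing with how this sum is computed efficiently, a small sketch can make the reduction rule Γ and the path sum concrete. The function below implements the collapse-then-drop-blanks rule of Eqs. (2)–(3) and, for tiny vocabularies and output lengths only, sums the path probabilities of Eq. (4) by brute force in probability space; the real training loss is computed with the dynamic program mentioned below, and all names here are illustrative.

```python
import itertools

EPS = "ε"  # CTC blank token

def ctc_reduce(path):
    """Γ(.): collapse consecutive repeats, then drop blanks, as in Eqs. (2)-(3)."""
    collapsed = [t for i, t in enumerate(path) if i == 0 or t != path[i - 1]]
    return "".join(t for t in collapsed if t != EPS)

assert ctc_reduce(list("AAGGGTYYYWWRWW")) == "AGTYWRW"
assert ctc_reduce(list("A" + EPS * 2 + "AGG" + EPS + "GTYYYWWRW" + EPS * 4 + "W")) == "AAGGTYWRWW"

def brute_force_target_prob(probs, vocab, target):
    """Sum P(path) over every length-t path that reduces to `target`.

    Only feasible for tiny t and vocabularies; `probs[i][tok]` is the model's
    probability for token `tok` at output position i.
    """
    total = 0.0
    for path in itertools.product(vocab, repeat=len(probs)):
        if ctc_reduce(path) == target:
            p = 1.0
            for i, tok in enumerate(path):
                p *= probs[i][tok]
            total += p
    return total

# Tiny demo: a 4-position output over a 3-letter vocabulary plus the blank.
vocab = ["C", "G", "T", EPS]
probs = [{"C": 0.7, "G": 0.1, "T": 0.1, EPS: 0.1},
         {"C": 0.2, "G": 0.5, "T": 0.2, EPS: 0.1},
         {"C": 0.1, "G": 0.2, "T": 0.1, EPS: 0.6},
         {"C": 0.1, "G": 0.1, "T": 0.7, EPS: 0.1}]
print(brute_force_target_prob(probs, vocab, "CGT"))  # includes paths such as CCGT and CGεT
```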
Since the probability is modeled independently, the probability of each y can be calculated as the multiplication of the probabilities of generating all y_i ∈ y. This multiplication can be expressed as the sum of the logarithm of the probabilities of each y_i. During the training process, our objective is to maximize the total probability of generating the target sequence A for each input spectrum S. Since we are utilizing gradient descent to optimize our model, this goal is equivalent to minimizing the negative total probability. Therefore, our loss function is simply defined as:
$$\mathcal{L}_{\mathrm{ctc}} = -P(A\mid S) \tag{5}$$
One could theoretically enumerate all possible paths y for each target sequence A in order to calculate the total probability (loss) for training our network. However, this approach becomes impractical, as the number of paths grows exponentially with the maximum generation length and would result in an unmanageable amount of computation time. Instead, we adopt a dynamic programming method, as detailed in the Supplementary Information, to calculate this loss efficiently. This approach allows us to train our model effectively without the computational burden of exhaustively enumerating all possible paths.
Knapsack-like dynamic programming decoding algorithm for precise mass control
The generated de novo peptide sequence should be strictly grounded by the molecular mass measured by the mass spectrometer. Specifically, the molecular mass of the ground-truth peptide, m_tr, falls in the range [m − σ, m + σ], where m is the precursor mass given by the mass spectrometer and σ is its measurement error, usually at the 10^{−3} level. However, neural network models offer low explainability and controllability, making it difficult to constrain the generated results to satisfy such requirements. To allow accurate generation, we reformulate the non-autoregressive generation as a knapsack-like optimization problem, in which we pick items (amino acids) to fill a bag under a weight constraint while maximizing the value (the predicted log probability). This optimization problem can be formulated as:
$$\text{maximize} \ \ \sum_{i=1}^{t}\log P(y_i\mid S) \qquad \text{subject to} \ \ L \le \sum_{\forall a_j \in \Gamma(\mathbf{y})} w(a_j) \le U \tag{6}$$
where L and U are the desired lower and upper bounds for the decoded peptide mass. We set L = m − tol and U = m + tol, where tol is the decoding tolerance within which we expect the true mass m_tr to fall after taking the measurement error into account. Inspired by a similar idea by Liu et al., we propose a dynamic programming method to solve this optimization task. We denote by e the decoding precision used to construct a two-dimensional DP table. For each time step, we have ⌈U/e⌉ cells, where ⌈⋅⌉ is the ceiling function. The l-th cell can only store peptides whose mass falls within [e·(l−1), e·l).
Specifically, the l-th cell at the τ-th time step, d_{τ,l}, stores the most probable τ-token sequence y_{1:τ} (scored by the sum of log probabilities from the non-autoregressive model) that satisfies the mass constraint
$$\sum_{\forall a_j \in \Gamma(\mathbf{y}_{1:\tau})} w(a_j) \in \bigl[\,e\cdot(l-1),\ e\cdot l\,\bigr).$$
We first initialize our DP table by filling the first time step, τ = 1, as follows:
$$d_{1,l}=\begin{cases}\epsilon, & \text{if } l=0\\ \bigcup_{\forall a_j\ \text{s.t.}\ w(a_j)\in[e\cdot(l-1),\ e\cdot l)}\{a_j\}, & \text{if } \exists\, w(a_j)\in[e\cdot(l-1),\ e\cdot l)\\ \varnothing, & \text{otherwise}\end{cases} \tag{7}$$
In the first case, d_{1,1} stores the one-token sequence with total mass in the range [0, e), where e is usually a very small number (e < 1) chosen for higher decoding accuracy; therefore no amino acid other than ϵ can fall under this mass limit. On the other hand, when l ≠ 1, there might be multiple amino acids whose mass falls within [e·(l−1), e·l). We store all of them in the l-th cell to avoid overlooking any possible starting amino acid. We then divide the recursion steps into three cases, H^{(1)}_{τ,l}, H^{(2)}_{τ,l} and H^{(3)}_{τ,l}, each storing its corresponding set of sequences following the rules below. When y_τ = ϵ, we know Γ(y_{1:τ−1}) = Γ(y_{1:τ}) due to the CTC reduction, so the mass stays the same. This gives the first set of candidate sequences:
$$H^{(1)}_{\tau,l}=\bigl\{\,\mathbf{y}\oplus\epsilon \mid \forall\,\mathbf{y}\in d_{\tau-1,l}\,\bigr\} \tag{8}$$
where ⊕ denotes concatenation. When the newly decoded non-ϵ token is a repetition of the last token, the reduced sequence remains the same and the mass is unchanged, due to the CTC rule. We get the second set of potential sequences:
$$H^{(2)}_{\tau,l}=\bigl\{\,\mathbf{y}\oplus y_{\tau-1} \mid \forall\,\mathbf{y}\in d_{\tau-1,l},\ \text{s.t.}\ y_{\tau-1}\neq\epsilon\,\bigr\} \tag{9}$$
When the newly decoded non-ϵ token differs from the last token in the already generated sequence, the mass increases. We select the potential sequences by checking that the total mass falls within the cell’s mass constraint:
$$H^{(3)}_{\tau,l}=\Bigl\{\,\mathbf{y}\oplus y_{\tau} \mid \forall\,1\le l_0<l,\ \forall\,\mathbf{y}\in d_{\tau-1,l_0},\ \forall\,y_{\tau}\neq\epsilon,\ \text{if } e\cdot(l-1)\le\!\!\sum_{\forall a_j\in\Gamma(\mathbf{y}\oplus y_{\tau})}\!\!w(a_j)<e\cdot l\,\Bigr\} \tag{10}$$
We then update the cell d_{τ,l} using all candidates from the above three sets:
$$d_{\tau,l}=\operatorname*{top-B}_{\forall\,\mathbf{y}\in H^{(1)}_{\tau,l}\cup H^{(2)}_{\tau,l}\cup H^{(3)}_{\tau,l}}\ \Bigl(\sum_{y_j\in\mathbf{y}}\log P(y_j\mid S)\Bigr) \tag{11}$$
where top-B takes the B most probable sequences according to the generated probability. We then select the most probable sequence at the d_{t,|A|} cell as our final result.
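The following is a compact, CPU-only sketch of the PMC decoding recursion in Eqs. (7)–(11), kept deliberately simple: bins are zero-based, every cell keeps at most B candidates, and the final answer is taken as the best candidate over all bins whose reduced mass lies inside [L, U]. Token masses, default parameters, and function names are assumptions for illustration; the actual PrimeNovo implementation is the CUDA-parallelized version described next.

```python
import math

EPS = "ε"  # CTC blank token; contributes no mass

def pmc_decode(log_probs, token_mass, lower, upper, precision=0.1, beam=4):
    """Knapsack-like DP over (time step, mass bin), mirroring Eqs. (7)-(11).

    log_probs[t][tok] : log-probability of emitting `tok` at output position t
    token_mass[tok]   : residue mass of each amino-acid token (EPS excluded)
    lower, upper      : mass window [L, U] that the reduced peptide must satisfy
    precision, beam   : mass-bin width e and per-cell beam size B
    """
    t_max = len(log_probs)
    n_bins = math.ceil(upper / precision) + 1
    # Each cell keeps up to `beam` tuples: (score, raw path, last token, reduced mass).
    table = [[[] for _ in range(n_bins)] for _ in range(t_max + 1)]
    table[0][0] = [(0.0, (), None, 0.0)]

    def push(cell, item):
        cell.append(item)
        cell.sort(key=lambda c: c[0], reverse=True)
        del cell[beam:]  # the top-B truncation of Eq. (11)

    for step in range(1, t_max + 1):
        for l in range(n_bins):
            for score, path, last, mass in table[step - 1][l]:
                for tok, lp in log_probs[step - 1].items():
                    if tok == EPS or tok == last:
                        # Eqs. (8)-(9): blank or repeated token; the reduced mass is unchanged.
                        push(table[step][l], (score + lp, path + (tok,), tok, mass))
                    else:
                        # Eq. (10): a new amino acid extends the reduced sequence.
                        new_mass = mass + token_mass[tok]
                        nl = int(new_mass // precision)
                        if nl < n_bins:
                            push(table[step][nl], (score + lp, path + (tok,), tok, new_mass))

    # Best candidate whose reduced mass satisfies the precursor-mass constraint.
    final = (c for l in range(n_bins) for c in table[t_max][l] if lower <= c[3] <= upper)
    return max(final, key=lambda c: c[0], default=None)  # apply Γ to the raw path for the peptide
```

With beam = 1 this reduces to a greedy mass-constrained search; larger B and finer e increase the work substantially, which is what motivates the CUDA implementation discussed next.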
CUDA acceleration for the proposed mass control decoding algorithm
The time complexity of our proposed mass control dynamic programming algorithm, when executed sequentially, is O(N_a · t · (U/e)^2), where N_a represents the total number of tokens (which, in our case, corresponds to the number of amino acids plus one). To implement a parallel version of the PMC unit, we employ the Compute Unified Device Architecture (CUDA). CUDA is a parallel computing framework developed by NVIDIA that allows programs to leverage the computational power of NVIDIA graphics processing units (GPUs) for a wide range of general-purpose computing tasks. Detailed information regarding our CUDA algorithm is provided in the Supplementary Information.
Reporting summary
Further information on research design is available in the Reporting Summary linked to this article.
For the k-th layer, the representation is

$$R^{(k)}=\operatorname{AttentionLayer}^{(k)}\big(R^{(k-1)}\big).\tag{1}$$

Here, R^{(0)} signifies the spectrum embedding for the encoder, while for the decoder it represents the summation of the positional and precursor embeddings. To maintain consistency, we keep the generation length fixed at t for the decoder. Consequently, the output of the final decoder layer undergoes a softmax operation, which calculates the probability distribution over tokens for each position. Our strategy for non-autoregressive modeling deviates from conventional autoregressive generation, which predicts each token's probability as P(a_{i+1} ∣ a_{1:i}). This approach, however, restricts bidirectional information, contrasting with protein structures where each amino acid is informed by both neighbors. To address this, we propose a non-autoregressive model where all amino acids are generated simultaneously, allowing each position to access bidirectional context. In this framework, each amino acid probability, P(a), is independently modeled, but this independence can lead to weak global coherence, resulting in nonsensical sequences despite locally accurate regions. For instance, a phrase like "au revoir" might ambiguously split into "see bye" in non-autoregressive translation with cross-entropy loss due to a lack of sequence-level cohesion. To mitigate this, we employ CTC loss, which improves global consistency by enhancing sequence-level coherence, leading to more accurate and cohesive peptide generation. To address cases where the generated token sequence, with a maximum length t, exceeds the target length, we introduce a reduction function, Γ(⋅), in non-autoregressive generation. This function merges consecutive identical amino acids, for example:

$$\Gamma(\text{AAGGGTYYYWWRWW})=\text{AGTYWRW}.\tag{2}$$

However, simple reduction is unsuitable for sequences with consecutive identical amino acids. Inspired by Graves et al., we use a blank token ϵ during generation. Identical amino acids separated by ϵ are not merged, and ϵ is later removed, resulting in

$$\Gamma(\text{A}\epsilon\epsilon\text{AGG}\epsilon\text{GTYYYWWRW}\epsilon\epsilon\epsilon\epsilon\text{W})=\text{AAGGTYWRWW}.\tag{3}$$

For a visual representation of this process, please refer to the Supplementary Figure. Following the CTC reduction rule described above, it is possible to obtain multiple decoding paths, denoted as y, which can all be reduced to the target sequence A. For instance, both CCGT and CGϵT, among many others, can be transformed into the target sequence CGT. Consequently, the probability of generating the target sequence A is the sum of the probabilities associated with all paths y that can be reduced to A:

$$P(A\mid S)=\sum_{\mathbf{y}:\,\Gamma(\mathbf{y})=A}P(\mathbf{y}\mid S)=\sum_{\mathbf{y}:\,\Gamma(\mathbf{y})=A}\ \sum_{y_i\in\mathbf{y}}\log\big(P(y_i\mid S)\big).\tag{4}$$

Here, y = (y_1, y_2, ⋯, y_t) represents a single decoding path in the non-autoregressive model output, satisfying the condition Γ(y) = A. The overall probability of generating the target sequence A, denoted as P(A ∣ S), is then computed as the sum of the probabilities of generating each y, with y_i at each position. Since the probability is modeled independently, the probability of each y can be calculated as the multiplication of the probabilities of generating all y_i ∈ y.
This multiplication can be expressed as the sum of the logarithm of the probabilities of each y_i. During the training process, our objective is to maximize the total probability of generating the target sequence A for each input spectrum S. Since we are utilizing gradient descent to optimize our model, this goal is equivalent to minimizing the negative total probability. Therefore, our loss function is simply defined as:

$$\mathcal{L}_{\mathrm{ctc}}=-P(A\mid S).\tag{5}$$

One could theoretically enumerate all possible paths y for each target sequence A in order to calculate the total probability (loss) for training our network. However, this approach becomes impractical as the number of paths grows exponentially with respect to the maximum generation length, which would result in an unmanageable amount of computation time. Instead, we adopt a dynamic programming method, as detailed in the Supplementary Information, to optimize the calculation of this loss efficiently. This approach allows us to train our model effectively without the computational burden of exhaustively enumerating all possible paths. The generated de novo peptide sequence should be strictly grounded by the molecular mass measured by the mass spectrometer. Specifically, the molecular mass of the ground-truth peptide, m_tr, falls in the range [m − σ, m + σ], where m is the precursor mass given by the mass spectrometer, and σ is the measurement error, usually at the 10⁻³ level, of the mass spectrometer used. However, neural network models are of low explainability and controllability, making it difficult to steer the generated results toward such requirements. To allow accurate generation, we reformulate the non-autoregressive generation as a knapsack-like optimization problem, where we are picking items (amino acids) to fill the bag under a certain weight constraint, while the value (predicted log-probability) is maximized. Such an optimization problem can be formulated as:

$$\text{maximize}\ \sum_{i=1}^{t}\log P(y_i\mid S)\quad\text{constrained with}\quad \mathcal{L}\le\sum_{\forall a_j\in\Gamma(\mathbf{y})}w(a_j)\le\mathcal{U},\tag{6}$$

where L and U are the desired lower bound and upper bound for the decoded peptide mass. We denote L = m − tol and U = m + tol, where tol is the decoding tolerance within which we assume the true mass m_tr falls, after taking measurement error into account. Inspired by a similar idea by Liu et al., we propose a dynamic programming method to solve such an optimization task. We denote e as the decoding precision used to construct a two-dimensional DP table. For each time step, we would have ⌈U/e⌉ cells, with ⌈⋅⌉ being the ceiling function. The l-th cell can only store peptides with mass precisely within [e·(l−1), e·l]; the table is then initialized and updated exactly as described in Eqs. (7)–(11) above, and the most probable sequence that satisfies the mass constraint is read off at the final time step.
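As a concrete illustration of the CTC reduction rule Γ and of training against the CTC objective described above, here is a small PyTorch sketch. It is a simplified stand-in, not the paper's code: the toy tensor shapes, the vocabulary layout and the use of torch.nn.CTCLoss (rather than the custom dynamic-programming loss referred to above) are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

BLANK = "ε"

def ctc_reduce(path, blank=BLANK):
    """Γ(.): merge consecutive duplicates, then drop the blank token."""
    reduced, prev = [], None
    for tok in path:
        if tok != prev:          # consecutive repeats collapse to one symbol
            if tok != blank:     # blanks are removed after collapsing
                reduced.append(tok)
        prev = tok
    return "".join(reduced)

# the two worked examples from Eqs. (2) and (3)
assert ctc_reduce("AAGGGTYYYWWRWW") == "AGTYWRW"
assert ctc_reduce("AεεAGGεGTYYYWWRWεεεεW") == "AAGGTYWRWW"

# --- CTC training objective on toy data (shapes are illustrative only) ---
vocab = [BLANK] + list("ACDEFGHIKLMNPQRSTVWY")    # blank + 20 amino acids
T, N, C = 40, 2, len(vocab)                       # decode length, batch, vocabulary

logits = torch.randn(T, N, C, requires_grad=True)  # stand-in for decoder outputs
log_probs = logits.log_softmax(dim=-1)             # CTCLoss expects log-probabilities

targets = torch.tensor([
    [vocab.index(a) for a in "AGTYWRW"] + [0] * 3,  # padded to a common length
    [vocab.index(a) for a in "AAGGTYWRWW"],
])
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.tensor([7, 10])

ctc = nn.CTCLoss(blank=0, zero_infinity=True)       # index 0 is the blank token
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                                      # gradients flow to the decoder logits
```

In practice the decoder logits would come from the non-autoregressive transformer rather than torch.randn; the reduction rule and the loss call are the parts the sketch is meant to make tangible.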
National Trends in the Prevalence of Unmet Health Care and Dental Care Needs During the COVID-19 Pandemic: Longitudinal Study in South Korea, 2009-2022
8f781f47-1ae8-4c22-afd9-a3f8fa8a57cf
11447424
Dentistry[mh]
Unmet health care and dental care needs significantly impact citizens’ quality of life and welfare. The SARS-CoV-2, which affects the respiratory system, has dramatically influenced the use of medical services, resulting in decreased hospital visits and hospitalization rates in many countries . Experts attribute this to factors such as anxiety, concerns about infection in hospitals, and executive orders to close hospitals, rather than a decrease in the actual number of patients . Unmet medical and oral care can exacerbate health conditions, making timely access to medical services crucial. Patients with chronic diseases, who require continuous care, face significant health risks when their medical needs are not met. Therefore, during the COVID-19 pandemic, it is important to identify trends and vulnerable groups experiencing delayed hospital visits despite needing treatment. Preliminary analysis suggests that socioeconomic status, geographical location, and preexisting health conditions may be significant risk factors for unmet health care and dental care needs, highlighting the complexity of the issue . Several studies have examined unmet health care needs, but many are short-term investigations or involve small sample sizes, focusing on groups such as older adults, the disabled, and individuals with specific diseases . Similarly, research on unmet dental needs has often been conducted over a relatively short period or within a specific group . Additionally, while experts have noted the decrease in hospital use after COVID-19, no studies have been found that compare the periods before and after the pandemic or analyze the associated risk factors . Moreover, only a few studies have investigated unmet health care and dental care needs. Therefore, it was necessary to examine the long-term trends of unmet health care and dental care needs and identify related risk factors for the entire population of Korea. Understanding vulnerable groups in the context of rapidly changing medical policies and restrictions after COVID-19 is essential for future predictions. Recent studies have highlighted various aspects of this issue. For example, research has discussed the heightened barriers to accessing health care services during the pandemic, particularly for low-income families . Additionally, unmet medical needs during the pandemic have significantly impacted mental health, with many individuals experiencing increased anxiety and depression due to delayed or foregone medical care . Furthermore, disruptions in dental care have been significant, with many patients unable to receive necessary treatments, underscoring the need for targeted interventions to address these gaps . Thus, our study aims to confirm the trends, relative risk factors, and impact of the COVID-19 pandemic on unmet health care and dental care needs using nationally representative data from the Korea Community Health Survey (KCHS) for the years 2009 to 2022 . Data This study used data from the KCHS for the years 2009 and 2011 through 2019, and 2021 through 2022 (excluding 2010 and 2020) . The KCHS is an annual initiative conducted by trained interviewers through household visits. This self-reporting survey targets approximately 900 adults aged 19 years and older at each of the 255 public health centers nationwide. It covers 18 domains and includes 163 health-related questions. In 2022, a comprehensive survey was conducted involving a total of 227,279 individuals. 
Out of the 163 survey questions, 18 focused on unmet health care and dental care needs. The analysis included factors such as age, sex, marital status, education, occupation, income status, residence, primary livelihood security recipient, smoking status, alcohol consumption, weekly walks, self-rated health, depression, BMI, and previous diagnoses of diabetes mellitus and hypertension. Data on unmet dental care needs for 2010 and 2020 were unavailable for this study. In total, 2,700,705 participants were included, with 1,229,671 (45.5%) being men . Ethics Approval The KCHS data were anonymized, and the study protocol received approval from the Institutional Review Board of the Korea Disease Control and Prevention Agency. All participants provided informed consent, and the study was conducted in accordance with the principles of the Declaration of Helsinki (approval numbers 2010-02CON-22-P, 2011-05CON04-C, 2012-07CON-01-2C, 2013-06EXP-01-3C, 2014-08EXP-09-​4CA, and 2016-10-01-TA). Analytic Framework The Andersen Health Care Utilization Model is a well-established theoretical framework that systematically analyzes both social and personal factors to explain determinants of health service utilization . According to the model, 3 dynamics—predisposing, enabling, and need variances—determine the consumption of health services, including outpatient and inpatient treatment. Predisposing factors are sociodemographic traits that increase an individual’s demand for health care, such as age, ethnicity, sex, and socioeconomic status. For example, individuals are more likely to seek care if they believe that using health services effectively addresses their illness. Enabling factors encompass community support, availability of health insurance, and family support. The demand for care reflects both one’s actual and perceived need for medical services. We referenced the Andersen Health Care Utilization Model to identify inequalities in access to health services and to determine how various factors contribute to the utilization of these services. Dependent Variable Unmet health care and dental care needs were defined as instances where individuals did not receive the medical or dental services deemed necessary by experts or desired by the patients . Subjective unmet health was analyzed based on the question “Have you ever needed health care (test or treatment) in this year but not received medical treatment?” and subjective unmet dental care was analyzed based on the question “Have you ever needed dental care (test or treatment) in this year but not received medical treatment?” . Independent Variable The survey period was treated as the independent variable and divided into 5 time segments to ensure estimation stability: 2009-2011 (excluding 2010), 2012-2014, 2015-2017, 2018-2019, and 2021-2022. The years 2021 and 2022 were designated as the COVID-19 pandemic period. Covariates The following covariates were considered as predisposing factors: age groups (19-39, 40-59, and ≥60 years), sex, marital status, education level (elementary school or lower, middle school, high school, and college or higher), residence (urban and rural) , and occupation (unemployed, blue-collar, and white-collar). Enabling/disabling factors included income status (unknown, <3 million KRW, 3-5 million KRW, and ≥5 million KRW per month; 1 KRW=US $0.00073) and being a basic livelihood security recipient. 
Need factors included smoking status, alcohol consumption (none, monthly, and weekly), weekly physical activity (rarely, 1-2, 3-4, and ≥5 times per week), subjective health status (bad, normal, and good), depression, previous diagnoses of diabetes mellitus or hypertension (yes or no), and BMI categories (underweight [<18.5 kg/m²], normal [18.5-22.9 kg/m²], overweight [23.0-24.9 kg/m²], and obese [≥25.0 kg/m²]). Statistical Analyses We investigated overall population characteristics and calculated the weighted prevalence of unmet health care and dental care needs for each subgroup. To obtain national prevalence estimates, we performed a comprehensive sample analysis, accounting for stratification, clustering, and weighting. The methodology for calculating weights was carefully designed to reflect the complex survey design of the KCHS. Weights assigned to each respondent were used to adjust for differences in selection probabilities and to align with the age and sex distribution of the Korean population. The weighting process involved several steps, including the calculation of design weights, poststratification weights, and final weights. A weighted linear regression model was used to determine the trend of unmet health care and dental care needs in the prepandemic and pandemic periods, identifying differences in coefficients between these periods. The prevalence and trends of unmet health care and dental care needs were analyzed accordingly. Finally, prevalence ratios (PRs) were computed to evaluate the interaction term for each risk factor, separately for the prepandemic and pandemic periods. This analytical approach allows for interpreting which demographic or risk groups exhibited heightened vulnerability to unmet health care or dental care needs during both the prepandemic and pandemic periods. All statistics are presented as weighted percentages with 95% CIs. Statistical significance was defined as a 2-sided P value of <.05. Statistical analyses were performed using SAS (version 9.4; SAS Institute).
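The analyses described above were carried out in SAS on the KCHS microdata. Purely as an illustration of the quantities involved (a survey-weighted prevalence by year and a weighted linear trend with a pandemic contrast), a minimal Python sketch is given below. The column names (year, unmet_care, weight) and the simulated data are hypothetical, and the sketch deliberately ignores the stratification and clustering adjustments of the full complex-survey analysis, so it is not a substitute for the published estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000

# hypothetical KCHS-like extract: one row per respondent
df = pd.DataFrame({
    "year":       rng.choice([2009, 2012, 2015, 2018, 2021, 2022], size=n),
    "unmet_care": rng.binomial(1, 0.10, size=n),   # 1 = reported an unmet need
    "weight":     rng.uniform(0.5, 2.0, size=n),   # individual survey weight
})
df["pandemic"] = (df["year"] >= 2021).astype(int)

# survey-weighted prevalence (%) of unmet needs per survey year
agg = (df.assign(wy=df["unmet_care"] * df["weight"])
         .groupby("year")[["wy", "weight"]].sum())
prevalence = 100 * agg["wy"] / agg["weight"]
print(prevalence)

# weighted linear trend with a prepandemic/pandemic slope contrast; the interaction
# coefficient plays the role of the beta difference reported in the study
time = df["year"] - df["year"].min()
X = sm.add_constant(pd.DataFrame({
    "time": time,
    "pandemic": df["pandemic"],
    "time_x_pandemic": time * df["pandemic"],
}))
trend = sm.WLS(df["unmet_care"], X, weights=df["weight"]).fit()
print(trend.params)
```

A linear probability model is used here only to show the mechanics of a weighted trend; the published analysis additionally computed prevalence ratios with interaction terms under the full survey design.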
After excluding outliers, we analyzed data from 2,700,705 respondents with complete survey responses. The sample was distributed across the survey periods as follows: 455,909 participants from 2009 to 2011 (excluding 2010); 679,865 participants from 2012 to 2014; 678,560 participants from 2015 to 2017; 435,091 participants from 2018 to 2019; 224,001 participants in 2021; and 227,279 participants in 2022. The weighted distributions of each dependent variable for participants were summarized, and complex sample weights were used to obtain more accurate population estimates. The weighted prevalence and 95% CIs of unmet health care needs were estimated for each demographic characteristic, along with the trends before and during the pandemic. The β difference from the linear regression model, comparing the prepandemic and pandemic periods, along with its 95% CI, indicates the impact of COVID-19 on these trends. Overall, the prevalence of unmet health care needs decreased from 14.06% (64,108/455,909) in 2009-2011 to 4.82% (10,790/224,001) in 2021, with a slight increase to 5.28% (11,990/227,279) in 2022. While most subgroups exhibited similar patterns, the unemployed subgroup within the occupation category continued to experience a decline (β diff 0.15, 95% CI 0.14-0.16) during the same period. The weighted prevalence and 95% CIs of unmet dental care needs were examined in the same way. We observed a higher prevalence of unmet dental care needs compared with health care needs. Although the failure rate to meet dental care needs generally declined from 24.46% (111,526/455,909) in 2009-2011 to 14.02% (31,864/227,279) in 2022, the β difference during the pandemic increased compared with prepandemic values (β diff 0.23, 95% CI 0.22-0.24). Risk factors for unmet health care needs before and during the pandemic were also assessed. Women experienced unmet health care needs more frequently than men both before (PR 1.04, 95% CI 1.03-1.04) and during the pandemic (PR 1.02, 95% CI 1.01-1.02). Similarly, basic livelihood security recipients were 1.13 times more likely to experience unmet health care needs before the pandemic (95% CI 1.12-1.15) and 1.05 times more likely during the pandemic (95% CI 1.04-1.07) compared with nonrecipients. Lastly, risk factors for unmet dental care needs before and during the pandemic were assessed. Women experienced unmet dental care needs more frequently than men both before (PR 1.04, 95% CI 1.03-1.04) and during the pandemic (PR 1.02, 95% CI 1.01-1.03).
Similarly, basic livelihood security recipients had a higher PR for unmet dental care needs compared with nonrecipients, with a PR of 1.24 before the pandemic (95% CI 1.21-1.26) and 1.16 during the pandemic (95% CI 1.13-1.19). Principal Findings Our study represents the first national analysis of unmet health care and dental care needs in South Korea using long-term, large-scale data from the KCHS. We assessed the prevalence of unmet health care and dental care needs from 2009 to 2022, focusing on trends before and during the COVID-19 pandemic. Notably, the overall prevalence of unmet needs showed a continuous decline over time. Interestingly, although the attrition rate for unmet health care needs decreased significantly during the COVID-19 pandemic, unmet dental care needs remained markedly higher than unmet medical needs. Therefore, it is essential to implement more targeted measures to address and prevent these gaps in dental care. Comparisons With Previous Studies Numerous studies have investigated unmet health care and dental care needs during the COVID-19 pandemic . These studies explored trends and related factors among Korean adults, such as low income , urban residence , unmarried status, and being female. Additionally, other research on the relationship between unmet medical needs and the COVID-19 pandemic focused on specific populations, including older adults , patients with chronic diseases, those with diabetes mellitus or poor glycemic control , community-dwelling older adults , and young adults . Several studies also examined unmet medical needs in the general adult population in Austria (N=2000) , Canada (N=23,972) , and the United States (N=1,483,378) . These studies found that older adults, inactive individuals, and retirees were particularly susceptible to experiencing unmet medical needs . Conversely, other research studies indicated that younger individuals are more vulnerable to unmet medical care needs . Identified risk factors also include having a chronic disease, being an immigrant , and being female . However, most of these studies were limited to short-term analyses, small or specific target populations, and data from either before or during the pandemic. By contrast, our study offers comprehensive insights into risk factors and trends for unmet health care and dental care needs over the years 2009 to 2019 (before the pandemic) and 2021 to 2022 (during the pandemic) using a substantial data set of 2.7 million individuals. Plausible Mechanism The prevalence of unmet health care and dental care needs may have decreased due to the economic relief provided by the medical insurance system, increased accessibility, and reduced waiting times resulting from the growth in the number of hospitals and clinics. However, the slower rate of decline in unmet needs could also be partially attributed to patients’ reluctance to visit hospitals or clinics due to concerns about potential infection . Many experts emphasized that the decrease in hospital visits was not due to a reduction in the number of patients, but rather to anxiety, concerns about infection, and executive orders to close hospitals . Significant demographic and social variables have been identified in individuals with a high prevalence of unmet health care needs. Notably, these individuals are often in their 40s, predominantly female, unmarried, with lower income brackets, and minimal levels of education. 
Blue-collar workers, in particular, are more likely to experience injuries during labor and see a more rapid decline in physical ability as they age. Female blue-collar workers, in particular, reportedly have higher rates of chronic diseases, nontreatment, and poor health behaviors . Additionally, the high unmet health care needs among those with low income, especially basic livelihood security recipients, are largely attributed to the economic burden associated with medical services. Individuals with a low level of education often experience a high rate of unmet health care needs, which is associated with lower income, poor health conditions, and high smoking rates. Similarly, people aged 40-59 years with high working hours and those living in rural areas with relatively low access to medical care are significantly influenced by time burdens, which appear to contribute to unmet health care needs. The relationship between lifestyle choices and unmet health care needs is evident. Smokers, individuals who rarely exercise, those with poor subjective health, and those who are underweight often experience unmet needs, despite receiving medical services, due to their higher use of health care. Additionally, poor mental health and low self-esteem, including depression, are known to adversely affect unmet medical needs . Unmet health care needs are particularly high among unmarried individuals and women, who are more vulnerable to mental health problems . While the risk factors for unmet dental needs are similar to those for unmet medical needs, unmet dental care needs are notably higher among patients with diabetes mellitus. This may be due to the particularly adverse effects of diabetes on oral health, leading to greater dental care needs . In summary, unmet health care needs were found to be high in groups with significant medical demands and economic, time, and psychological burdens. The β diff analysis revealed that, after the COVID-19 pandemic, individuals who are older, unemployed, have low income, poor subjective health, low weight, high blood pressure, diabetes, urban residents, and those who smoke or consume alcohol were particularly vulnerable. Groups with high health care demands, such as those with poor subjective health, low weight, high blood pressure, and diabetes, as well as those with weak social or economic foundations (older adults, unemployed, and low income), were more affected by COVID-19. Urban areas also experienced greater changes due to higher population densities and stricter infectious disease regulations . The decrease in dental care needs during the pandemic is attributed to COVID-19’s respiratory transmission, which led to reluctance to visit dentists due to the need to remove masks. Limitations This study has several limitations. First, unmet medical care needs were defined based on patients’ subjective judgments, which might lead to discrepancies with actual needs. Second, values such as health level and BMI were self-reported, although self-reported BMI is generally reliable. Third, data from 2010 and 2020 were excluded due to the absence of questions regarding unmet health care needs, which may affect trend analyses for the COVID-19 pandemic period. Finally, the study focused solely on Korean adults, which may limit the generalizability of the findings to global trends. 
Conclusions This long-term, representative, population-based study shows that while the prevalence of unmet health care needs has generally declined, there was a noticeable decrease during the COVID-19 pandemic. Addressing this issue requires implementing more detailed measures to prevent the emergence of unmet medical and dental care needs.
Registrars’ experience with research in family medicine training programmes in South Africa
92fa55f9-5069-4224-a887-b5d8613cc153
11079345
Family Medicine[mh]
Primary health care should be the cornerstone of healthcare systems. , In recent years, governments have re-committed themselves to primary health care and the need for primary care research. , , According to the World Health Organization’s 2008 World Health Report on primary healthcare, ‘we need a health system that responds better and faster to a changing world’. Primary health care research is needed for such a response, to inform evidence-based practice, to improve health systems and policies, and to strengthen service delivery. Parallel to this, there has been a global decline in physicians involved with research since the 1990s, which in turn, has intensified the focus on research in education programmes. Primary health care is widely viewed as important, but in many countries it is still poorly developed with a lack of underlying evidence and research. Primary health care is unique in that it has the potential to improve the health status of populations and enable the health system to be more responsive to community needs, as well as more resilient, efficient and equitable. A wide variety of research is possible, including basic research, clinical research, research on health services and health systems, as well as education and training of the workforce. Globally, education in research skills is recognised as a fundamental aspect of residency training for clinicians. In sub-Saharan Africa, the need to train family physicians has been emphasised, including their ability to perform research. Family physicians in Africa can make an important contribution to district health systems, and adequate training in all their roles, including research, is vital. , Family physicians in South Africa need to fulfil six roles: as care providers, consultants, capacity builders, clinical trainers, clinical governance leaders and champions of community orientated primary care. All of these roles require insight into evidence-based medicine and appraisal of research for the healthcare team. The research toolkit is also of value in clinical governance to evaluate and improve the quality of care as part of service delivery. In the Western Cape, family physicians have also created a practice-based research network to conduct relevant applied research projects. A number of strategies have been identified to help build research capacity in primary health care settings in developing countries. These strategies include the development of training programmes with a focus on research skills (including mentorship programmes), motivating medical schools to establish family medicine departments that can build research capacity, encouraging primary care clinicians to partake in research activities, and also incorporate research into patient care and service delivery, and to establish partnerships with international organisations that support research. Echoing this, the Health Professions Council of South Africa (HPCSA) issued a directive in 2011, making a research component a prerequisite for registering as a specialist. However, this prerequisite now serves as an obstacle to graduating new family physicians and limits the output of the training programmes, as many programmes lack supervisory capacity and established researchers. A recent analysis of family physicians in South Africa concluded that a three-fold increase in throughput is needed to fulfil the minimal needs of the health system for family physicians. Improving the completion of the research assignment will help to achieve this goal. 
Literature on the experience of registrars with conducting research is scarce, and most evidence is from high-income countries. Factors identified as barriers to research in the primary care setting include: time constraints, lack of research skills, lack of research training or curriculum, lack of adequate supervision, and balancing clinical duties with research. There is evidence that training is needed not only in undertaking research, but also in understanding and using research in daily practice. Key factors that influence the success of registrars in performing research include: availability of mentorship, adequately qualified supervisors, training programmes that focus on orientation and preparation for research, developing skills in evidence-based practice (even at an undergraduate level), opportunities to publish and present research, partnerships with health services and policy makers to facilitate support for research, and adequate scientific software and financial resources. In South Africa, a recent study with surgical registrars identified a few factors that impede the research process. These included a lack of funding to perform research, a lack of dedicated time to complete the research and the burden of clinical responsibilities. No such study has been performed with family medicine registrars and key questions remain: Is there adequate teaching, support and supervision? How does undergraduate training influence capability as a registrar? What factors influence the registrars' success in their research assignment? How can programmes improve the registrars' capacity to complete research assignments on time? The aim of this research was to explore the experience of registrars with their research assignments in postgraduate family medicine training programmes across South Africa. Specific objectives were to explore the registrars' prior learning of research skills, their experience of formal teaching of research, the challenges of balancing clinical responsibilities with doing research, the registrar-supervisor relationship, and the influence of the institutional environment on performing research. Study design The study was designed as an explorative, descriptive qualitative study in order to understand the experience of registrars with their research assignments. Setting Registrars in family medicine are trained through nine universities in South Africa: the University of Cape Town, the University of Witwatersrand, Stellenbosch University, the University of KwaZulu-Natal, the University of Pretoria, Sefako Makgatho University, the University of the Free State, the University of Limpopo and Walter Sisulu University. Each university has a number of accredited training complexes in both rural and urban settings. Registrars are mostly trained in primary health care and district hospitals, but can also rotate to larger regional and tertiary hospitals. Training programmes are for 4 years and registrars attempt to take the Fellowship of the College of Family Physicians (FCFP) exit examination in their fourth year. At the time of this study, Part A of the Fellowship was a clinical examination and Part B required successful completion of the research assignment. If a registrar had not completed their research assignment, they could not obtain the Fellowship or register as a family physician with the HPCSA. Completion of the research assignment is therefore a rate-limiting step in the national pipeline of new family physicians.
A recent decision by the College of Family Physicians will remove the research requirement from the Fellowship examination sometime in the future, but this will remain a requirement for the HPCSA. Each training programme has its own approach to teaching research and supporting novice researchers. Universities may start the process at different stages of the programme and have varying levels of supervisory capacity. Universities may also differ in the way research is approved and ethical issues are considered. They may also differ in the specific requirements of the final research assignment, as some may require a journal article format and others a formal dissertation. The experience of registrars and success in completing their research may, therefore, vary considerably between programmes. The researcher was a final year registrar in family medicine and was working in a rural town as part of the training programme at Stellenbosch University. The researcher acted as interviewer, and interviews took place on a virtual platform. The researcher had not performed qualitative research previously and was supervised. In performing this research study, she had to experience many of the same challenges that were described by the participants and realised that this was a difficult experience for many of her colleagues. She had no prior experience of registrars from other training programmes. She did not know registrars from the other university programmes. She was aware of the need to remain open to all perspectives in the interviews and analysis, and to be aware of her own viewpoints. Study population The study population included newly graduated family physicians (having passed FCFP exams within the past year) and registrars still in their final year (year 4) of study. Approximately 20 new family physicians pass the exam each year, and there are around 55 registrars nationally per year in the programmes. There were no exclusion criteria. Sample size The intended sample size was 18 registrars, with 2 registrars from each university. Concurrent data analysis determined the final sample size. Initial analysis was conducted after the first nine interviews (one per programme), and if thematic saturation was not achieved, then five more interviews were performed, again followed by data analysis. The final four interviews were only performed if saturation was still not achieved, with the possibility of additional interviews if necessary. Sampling strategy Extreme case purposeful sampling was used to identify one registrar who completed research in a timely way (defined as completing research in time to take the Fellowship Part B in year 4 of the programme) and one registrar who needed extension to complete research (defined as not completing in time to pass Part B of the Fellowship during their fourth year of the programme). Participants were selected with the help of their representative on the South African Academy of Family Physician’s (SAAFP) Education and Training Committee. Registrars have formed a virtual community in South Africa, and their representative on the SAAFP is in touch with registrars from all the programmes. The representative on the SAAFP was contacted to help identify suitable candidates for the study; an informative message was sent to the proposed candidates via the representative, only once the candidate agreed to partake in the study was their information shared with the interviewer. 
Data collection The researcher developed an interview guide with potential open-ended questions on each topic. These topics covered all five objectives of the study and were informed by the available literature. The guide was revised with feedback from the supervisor, was piloted, reviewed and subsequently approved. The researcher contacted participants in advance and sent a consent form together with an information sheet pertaining to the research. Semi-structured interviews took place at a mutually agreeable time, in English, via a virtual platform (Zoom), over 30–60 min. All interviews were audio-recorded for later transcription and analysis. Data analysis A professional transcriber created verbatim transcripts, and the researcher checked them for accuracy against the audio tapes. The researcher performed the analysis with supervision of key steps. Concurrent analysis followed the steps of the framework method with the assistance of ATLAS.ti software. The framework method was developed for applied policy research and has been used extensively by family medicine in qualitative studies within South Africa. It provides a clear and structured approach to qualitative data analysis: Familiarisation: the researcher reviewed the recorded interviews and transcriptions, and inductively identified issues that could be coded. Coding index: codes were defined from the inductive issues that were identified from the data in step 1. They were then organised into categories. Coding: codes were applied to each transcript. Charting: codes were grouped into families within ATLAS.ti according to the categories developed in step 2. Reports were created in ATLAS.ti that brought all the data together within a code family. Interpretation: each report was interpreted to derive themes and subthemes. The range of opinions and experiences in each theme were described as well as any relationships between themes. The supervisor assisted with checking the coding index and the interpretation as well as assisting with using ATLAS.ti software to perform the analysis. Reflexivity formed part of this process, and the author's own reactions and thoughts to the data were documented and discussed with the supervisor. Ethical considerations Ethics approval was obtained from the Health Research Ethics Committee 1 of Stellenbosch University (reference number S21/07/114). The study was conducted according to the ethical guidelines and principles of the International Declaration of Helsinki, South African Guidelines for Good Clinical Practise and the Medical Research Council Ethical Guidelines for Research. The national Education and Training Committee of the SAAFP gave permission to conduct the research.
Participants were selected with the help of their representative on the South African Academy of Family Physicians' (SAAFP) Education and Training Committee. Registrars have formed a virtual community in South Africa, and their representative on the SAAFP is in touch with registrars from all the programmes. The representative was contacted to help identify suitable candidates for the study; an informative message was sent to the proposed candidates via the representative, and a candidate's details were only shared with the interviewer once the candidate had agreed to take part in the study. Data collection The researcher developed an interview guide with potential open-ended questions on each topic. These topics covered all five objectives of the study and were informed by the available literature. The guide was revised with feedback from the supervisor, piloted, reviewed and subsequently approved. The researcher contacted participants in advance and sent a consent form together with an information sheet pertaining to the research. Semi-structured interviews took place at a mutually agreeable time, in English, via a virtual platform (Zoom), over 30–60 min. All interviews were audio-recorded for later transcription and analysis. Data analysis A professional transcriber created verbatim transcripts, and the researcher checked them for accuracy against the audio recordings. The researcher performed the analysis with supervision of key steps. Concurrent analysis followed the steps of the framework method with the assistance of ATLAS.ti software. The framework method was developed for applied policy research and has been used extensively in family medicine qualitative studies in South Africa. It provides a clear and structured approach to qualitative data analysis:
1. Familiarisation: the researcher reviewed the recorded interviews and transcriptions, and inductively identified issues that could be coded.
2. Coding index: codes were defined from the inductive issues identified from the data in step 1. They were then organised into categories.
3. Coding: codes were applied to each transcript.
4. Charting: codes were grouped into families within ATLAS.ti according to the categories developed in step 2. Reports were created in ATLAS.ti that brought all the data together within a code family.
5. Interpretation: each report was interpreted to derive themes and subthemes. The range of opinions and experiences in each theme was described, as well as any relationships between themes.
The supervisor assisted with checking the coding index and the interpretation, as well as with using ATLAS.ti software to perform the analysis. Reflexivity formed part of this process, and the researcher's own reactions and thoughts on the data were documented and discussed with the supervisor. Ethical considerations Ethics approval was obtained from the Health Research Ethics Committee 1 of Stellenbosch University (reference number S21/07/114). The study was conducted according to the ethical guidelines and principles of the Declaration of Helsinki, the South African Guidelines for Good Clinical Practice and the Medical Research Council Ethical Guidelines for Research. The national Education and Training Committee of the SAAFP gave permission to conduct the research.
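As a purely illustrative aside, the charting and reporting logic of the framework method (steps 2–5 above) can be mirrored in a few lines of Python. The actual analysis was carried out interactively in ATLAS.ti; the codes, code families and quotation placeholders below are hypothetical and are not drawn from the study's data.

```python
from collections import defaultdict

# Steps 2-3 (coding index and coding): segments of transcripts tagged with codes.
coded_segments = [
    {"transcript": "P01", "code": "no prior research exposure", "quotation": "..."},
    {"transcript": "P02", "code": "supervisor responsiveness", "quotation": "..."},
    {"transcript": "P03", "code": "no prior research exposure", "quotation": "..."},
]

# Codes organised into categories (the equivalent of ATLAS.ti code families).
code_families = {
    "Prior exposure to research": {"no prior research exposure"},
    "Registrar-supervisor relationship": {"supervisor responsiveness"},
}

# Step 4 (charting): group coded segments by family, bringing the data together.
reports = defaultdict(list)
for segment in coded_segments:
    for family, codes in code_families.items():
        if segment["code"] in codes:
            reports[family].append(segment)

# Step 5 (interpretation) then works from one report per family to derive themes.
for family, segments in reports.items():
    print(f"{family}: {len(segments)} coded segment(s) charted")
```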
The characteristics of the participants are summarised in the accompanying table. The different training programmes are identified by a letter, to show the distribution across programmes, but not the actual programmes. Thematic analysis identified seven themes with subthemes, which are presented below. Prior exposure to research There was little undergraduate exposure to performing research. Most of the registrars had never been involved with research or exposed to the research process. Unlike the registrars' clinical skills, it was important that teachers did not assume prior learning of research methods or processes. Registrars were mostly complete novices in the field of research: ‘So I would say for someone who knows as little as we do at our first or second year of our Masters training, I would say, [I] mean I think people forget that we really do not know anything about research.’ (37-year-old male, newly qualified family physician, completed the research on time) Participants reported that this limited experience with research contributed to feeling overwhelmed by the requirements of the research assignment. Few participants reported being involved with aspects of research prior to starting the programme. Examples included taking part in clinical audits, writing articles or helping to collect data for someone else’s study, but none of these experiences fully prepared them for the research assignment. However, those few participants who had been involved with prior research reported an improved experience with performing the research assignment. Formal teaching and training of research competencies A formal teaching module on research typically took place in the first or second year of training. One participant reported that there was no formal teaching and that all teaching was done informally by family physicians at the training sites. Formal modules differed considerably in content and might focus on teaching research methods, critical appraisal skills or writing the research proposal. The educational approaches also differed, from didactic teaching to round table discussions or peer review groups. Respondents suggested that feedback on their draft proposals and research assignments, together with interactive discussions, was of more value than simply learning theory about research: ‘We spend the first year having qualitative meetings where [we] would present a proposal or something like that, and people would ask questions. Fine-tuning so that by the time you get going you got quite a mature idea of what you were wanting. Rather than running ahead with something that you hadn’t quite whittled down to what could be achievable.’ (36-year-old male, newly qualified family physician, completed the research on time) Outside of the departments of family medicine, universities and other organisations offered optional additional training courses on research skills. Many participants did not consider such additional training, but registrars who did reported an improved skill set. Topics ranged from performing and analysing qualitative research to courses focussed on biostatistics. Complementary to the formal teaching of research, the participants also noted that journal clubs and contact sessions contributed to their overall learning. Journal clubs were valuable as they exposed registrars to research studies, helped them appraise and interpret research, and helped them apply their new knowledge to their own research assignments. Although journal clubs or contact sessions were used by participants to help them complete their proposals and data collection, they could not replace the formal teaching of research.
The institutional processes involved with the research process Participants experienced the process of obtaining ethics approval as a time-consuming barrier to completing the research assignment, which caused significant frustration. On the other hand, many participants were aware of this barrier, and most were not surprised by the time that elapsed between submission to the ethics committee and receiving feedback. Likewise, many registrars required provincial permission to perform their research within the health services, and all agreed that this too was a time-consuming process: ‘I had a big delay when I submitted my proposal to ethics, I think had about a four- or five-months delay to get it back. And so that was frustrating for me.’ (36-year-old male, newly qualified family physician, completed the research on time) Library services played an important role in completion of the research proposal. Participants often consulted dedicated library personnel to assist with searching for and finding articles relevant to their research subject. Feelings of gratitude and relief were common. Scholarly support services that participants used included biostatistics, transcription, copyediting and translation services. Participants reported that the value of statistical support was dependent on the statistician involved. Transcription and translation services were not readily available and on occasion had to be outsourced. The experience of copyediting services was overall positive and contributed to a more professional presentation of the research assignment. The financial burden of performing a research assignment was well managed by the registrars. Typical costs included ethics review, travel, cell phone data and transcription services. Some participants were awarded funding through their universities or from other organisations, but most studies were self-funded. In the case of self-funded studies, the participants managed to complete the studies without financial support, even though this did on occasion cause a delay in the process: ‘I had to look in private if I can get such a person. And they don’t come cheap. Yes. And the research is self-funded, so that was the other delay.’ (36-year-old female, registrar year 4, still completing the research) The possibility of publishing the research added value to the investment of time and energy in the research assignment. Some registrars were motivated by the possibility of publishing their work and by opportunities to engage with colleagues who shared the same interests at conferences and academic days. The format of the final assignment differed across training programmes, from a full dissertation to publication-ready articles. It was evident that publication-ready formats added extra motivation to complete the research. Although not all participants had experienced the examination process, those who did reported it to be a fair process with positive feedback. The experience of performing the research The feasibility of the research project was a common theme. While the research assignment is needed to fulfil the requirements for the Master’s degree, it should also be an appropriately sized task. Various factors enabled participants to choose an achievable goal, such as supervisor feedback and peer review in formal feedback sessions. This feedback enabled the registrars to downscale if needed and simplify their projects to ensure a more feasible study was attempted: ‘And then another factor that I think was really important is that it is your MMed and not your PhD.
You need to finish it and it needs to be achievable. It doesn’t [need] to be life altering research, but it does need to be a good experience in terms of getting some research out. And even if it is something that you can publish. It needs to be a good first experience. To make sure that you are able to put limits on what you are researching.’ (36-year-old male, newly qualified family physician, completed on time) A lack of specific deadlines could contribute to procrastination and delay in the completion of the research assignment. On the other hand, one participant reported that the pressure of a deadline to choose a topic caused him to choose a topic that he was not particularly passionate about, leading to poor motivation to complete the project. Data collection in the health system could be challenging. For example, the incomplete nature of clinical note keeping and the disorganisation of the filing system complicated accurate and complete data collection, and delayed the process. Registrars also reported specialist departments agreeing to assist with participant identification and data collection, but then not following through. One registrar even reported being required to take up additional managerial responsibilities as the interim clinical manager in a rural hospital, which delayed the completion of the research assignment. A research assistant was mentioned as a helpful resource. Finding a suitable assistant could be difficult, especially when there were language requirements. The use of an assistant could be a methodological requirement in order to reduce bias or overcome language barriers, and could also be necessary to make progress while still fulfilling one’s clinical responsibilities. However, research assistants also needed to be paid, trained and supervised: ‘…it can be quite hard to actually get a research assistant, for example, if you do need to acquire the data during normal working hours, and don’t want to impact your service delivery, etcetera. To try and actually get a reliable research assistant and to get funding for such a person can be quite a challenge.’ (32-year-old male, newly qualified family physician, completed the research on time) Nevertheless, respondents all agreed that performing their own research was an informative learning experience, with many new skills and competencies acquired through the process. External and unpredictable factors influencing the research process According to respondents, the COVID-19 pandemic influenced the completion of their research projects in various ways. For some participants, the pandemic created new hurdles; for example, some had to conceptualise new research projects when their original projects were no longer feasible under COVID-19 restrictions: ‘But, you know, those are the sorts of unforeseen things that happened with research, predicting global pandemics isn’t exactly easy when designing your questionnaires etcetera.’ (32-year-old male, newly qualified family physician, completed the research on time) The COVID-19 pandemic also saw respondents taking up new roles, such as sub-district COVID-19 coordinators, increasing their clinical responsibilities and limiting time for the research even further. For others, the pandemic created new opportunities for research; for example, the homeless population, housed in shelters during hard lockdown, became an easily accessible study population.
Furthermore, unforeseen incidents such as riots and social unrest hampered data collection for other participants. The registrar–supervisor relationship Programmes differed in how they appointed supervisors. Some registrars mentioned being assigned a supervisor with no input from their side and found this to be an authoritarian approach by the respective department or faculty. On occasion, the appointment was made based on ease of access, as the supervising family physician worked at the same facility. On other occasions, it was a collaborative decision, based on the interests and expertise of the available supervisors, in conjunction with the topics presented by the registrars. The registrar’s personal choice was also considered in some instances. Ease of access and responsiveness were highly valued by the registrars: ‘Supervisors are your family physicians who are quite readily available and who you work with quite often so you can always contact with them as well.’ (43-year-old male, completed registrar time, still completing the research) Furthermore, if the registrar’s personal choice and the supervisor’s expertise were considered, it led to greater confidence in the supervisor and improved the overall experience of the research process. Some of the common complaints mentioned by the participants related to the accessibility and responsiveness of the supervisors. Most reported that supervisors were easily accessible, via email or telephone and sometimes in person, but that their responsiveness to aspects of the research was inconsistent. Some supervisors were very responsive to questions, while others took longer than the agreed amount of time (varying from weeks to months) to respond to queries, and others did not respond at all: ‘So I felt so alone because you write something, send Prof, he’ll take like months to respond to you, and remember your programme is not research alone.’ (39-year-old female, completed registrar time, still completing the research) Several potential factors were mentioned as contributing to the responsiveness of the supervisor. Other research responsibilities, for example being supervisor to many students, could limit responsiveness. Other work responsibilities, such as clinical responsibilities in the case of a family physician or managerial responsibilities as head of department, might also impede supervision: ‘I guess we need to understand that also, they are not just there for research and my supervisor in particular, I think he had six or seven research people that he was helping.’ (43-year-old male, completed registrar time, still completing the research) Some participants reported that the feedback from supervisors was not constructive and did not contribute to the value of the final assignment or help them learn new skills: ‘The person would not try and help with small details or help to try and find where I’m struggling to see if they can help with that.’ (37-year-old male, newly qualified family physician, completed the research on time) Other respondents experienced the complete opposite in terms of communication with their supervisor and the support offered to them during this assignment. They reported having support throughout the process; for example, the supervisor being always available and giving continuous feedback as the project developed. This improved the registrar’s ability to continue with the project in a timely way.
Such supervisors gave special attention to guiding the registrar through the research process, for example, investing time and energy to ensure the development of new skills and competencies: ‘…she actually links me to also some of her colleagues who [are] experienced in certain things that if she’s not sure about you know, she’ll refer me to them and then we will engage with them in our conversations in how to approach the research.’ (31-year-old female, registrar time completed, still completing the research) In cases where the supervisor gave constructive feedback and offered ongoing guidance, the respondents felt more supported and had an improved experience. Constructive feedback also enabled the respondents to learn and practise new skills, such as adopting an academic writing style early in the write-up of the assignment. The registrar–supervisor relationship was described as a collaborative and interdependent one, in which the participant works on an element of the project, sends it to the supervisor and then needs to wait for feedback before being able to continue with the work on the assignment. Sometimes this iterative process could be too slow and disrupt momentum: ‘Then making changes, sending it back, then, you know, waiting another two, three weeks just to get minor changes back. So that, yes that’s why I said, it felt quite tedious at a stage, trying to make changes but only getting a reply on your changes a month later. And then you’re already kind of forgot and your focus has already shifted to something else.’ (32-year-old female, newly qualified family physician, completed the research on time) The expertise of a supervisor was another factor contributing to the overall experience of participants and the quality of the research assignment. Supervisors were perceived as more effective if they had prior experience of supervising students and experience in performing quantitative or qualitative research: ‘And I can say that the quality of your research also depends on your supervisor’s experience. So the more experienced supervisor you have, the better experience you have in terms of doing the research because they are able to guide you better.’ (31-year-old female, registrar time completed, still completing the research) Some respondents alluded to the supervisor having not only a supervisory and capacity-building role in the research assignment, but also a broader mentoring role for the registrar. Respondents reported that such supervisors could be a role model and mentor for the family medicine specialty or could fulfil the role of a ‘father figure’. Other participants reported that the supervisor did not fulfil the role of mentor and that a more in-depth relationship would have been needed to develop a mentor–mentee relationship. The exit of a supervisor, for example by resigning or taking up another post, complicated the research process. Such a supervisor might become less accessible and responsive, or a new supervisor might be needed. Balancing clinical responsibilities with personal factors and the research process Respondents reported that the family medicine training programme had many academic tasks and responsibilities. These included assignments, presentations, quality improvement projects (QIPs), self-study and portfolio work (including observed consultations and observed procedures): ‘So, family medicine has a lot of coursework, a lot of assignments, a lot of mini research projects. I mean, the QIP itself is a research project.
So, there’s a lot of content in terms of how much you have to learn, and how much you have to do. And then on top of that, the research is also quite demanding, it’s an additional thing.’ (31-year-old female, registrar time completed, still completing the research) The research project often ran alongside these tasks and was then neglected in favour of the more immediate academic work. Registrars found it challenging to balance all their responsibilities related to the specialisation, because of the multifaceted nature of the programme as well as other demands such as clinical work, overtime hours, and personal and family life. The urgent nature of clinical work meant that it often took preference over performing the research component: ‘I think the crux of the matter is that the research often feels like it takes a back foot to the rest of your, your responsibilities as a registrar.’ (32-year-old male, newly qualified family physician, completed the research on time) The reality of the health system is that registrars take up a clinical post; therefore, if they take time out for research, clinical service delivery is directly affected, and the clinical team has to carry the consequences. This led to conflict between clinical and academic responsibilities. The responsibility of performing research also influenced the registrar’s relationship with co-workers. Registrars often felt guilty for not pulling their weight in the clinical area when they had to dedicate time to performing research and were unable to fulfil their clinical duties. This contributed to strained relationships in the workplace: ‘So it does sort of create a [strained] relationship between registrars and other colleagues because people feel that you actually have time off when you actually don’t have time off, you actually have so much on your plate and people just feel like you’re just dodging and diving the whole time and you are never there, you know so it does create a very questionable relationship with colleagues.’ (31-year-old female, registrar time completed, still completing the research) However, when co-workers were actively involved in the research, such as assisting with identifying participants, the experience could change from a strained to a supportive and collaborative relationship. Participants had varied experiences in obtaining protected time to perform their research. Some respondents reported that protected time was a theoretical concept and never realised in practice, whereas other respondents used dedicated time (designated for different aspects of the programme, such as journal clubs, contact sessions and special leave) to facilitate various components of the research process so that it did not compete with clinical responsibilities. The shared expectation among participants was that dedicated time should be a priority and was a necessity for successful and timely completion of the research assignment: ‘I feel like something that must be set out from the very beginning for registrars is, “if you need to complete an academic program for which your research it is such a big component of; the least you can get is time to do it during the day.” Because it is very difficult to complete something like this. It is such a major part of completing your degree.’ (32-year-old female, newly qualified family physician, completed research on time) Much of the research was completed at home or after hours, where it competed with commuted overtime as well as family and personal life.
Personal factors influencing the research assignment included pregnancy and childcare, maternity leave and health issues. Registrars needed to cope with stress, stay motivated, handle the emotional burden of performing the research and hone their time management skills. It was easy to become overwhelmed, and this could also impact on completion of the registrar programme as a whole: ‘So it gets overwhelming, extremely overwhelming. It gets extremely tiring. And I think that’s one of the reasons why we lose so many registrars in the course people just don’t finish the course because it’s, it just gets overwhelming and people just don’t cope with the demand of, of the course you know, so, yes, so it was, it’s not easy. It’s not easy at all.’ (31-year-old female, registrar time completed, still completing the research) Completion of the research assignment depended greatly upon the motivation of the individual registrar. This motivation stemmed from self-driven work and learning. Participants agreed that motivation played a large part in getting the work done. Choosing a topic that maintained and inspired such motivation was a key factor. A respondent also reported that this motivation was linked to the registrar–supervisor relationship and that support and feedback from the supervisor could influence the motivation with which the registrar approached the research project. Participants were divided in terms of their future likelihood of performing research. Some respondents would grasp at the opportunity to go through the process of doing research again. Other participants shared that this experience had demotivated them from performing research in the future but, given the opportunity to improve the circumstances surrounding the research process, they would consider engaging with research again.
Summary of key findings The successful and timely completion of the research assignment for the Master’s component of the Family Medicine training programme in South Africa is a complex and interdependent process. Multiple factors are important and interact: the registrar’s prior exposure to research, the teaching of research skills, the academic institutional processes, the individual experience of performing the research assignment, and the balancing of clinical duties with academic and personal responsibilities. Furthermore, the registrar–supervisor relationship was a critical component. External unpredictable factors, such as the COVID-19 pandemic and social unrest, could also impact on the process. The interconnectedness of these factors is evident. The training context and health system in which the registrar worked also contributed to the overall research experience. Universities that offered formal teaching appropriate to registrars, a collaboratively selected registrar–supervisor pairing and adequate institutional support facilitated an improved research experience. These multiple factors and the complexity of the process have also been recognised in other international studies. Discussion of key findings Registrars are usually building on prior learning when they develop their clinical skills, but this was not the case with the research assignment. Although registrars are enrolled for a Master’s degree, they have minimal research experience from their undergraduate training and are less prepared than other Masters-level students. Supervisors appeared to assume, incorrectly, that registrars had prior learning, whereas most were coming to this as a completely new and daunting task. This led to feelings of inadequacy, incompetence and being overwhelmed. For many, this research assignment was their first interaction with the research process and could influence their future professional identity as a researcher. Family physicians have a role to fulfil as researchers in the district healthcare system, and being competent in performing research contributes to excellence in their roles as mentors, teachers, clinical governance leaders and capacity builders. The development of research competencies is essential to success, and should be intentional, structured and incremental. Teaching programmes need to focus on building research competencies. Limited research and supervisory capacity has been identified as a key factor in several specialist training programmes. There is a need to build such capacity through training opportunities, collaborative cohort models of supervision, further training at a doctoral level, practice-based research networks and mentoring. Although participants reported formal teaching of research competencies, there was no standardised format across training programmes. A lack of a uniform approach and agreement on what is required is a widespread problem. It appeared that a focus on the practical steps of completing the research was more useful than a didactic, theory-based approach to research that was unrelated to the immediate task.
Teaching should recognise the lack of prior learning, focus on methods, be appropriate to the disciplinary field and be tailored to the registrar’s learning needs. Completion of a submission-ready manuscript as the final product has been reported to shorten the time taken to complete research by eight months and to enable publication. Modular approaches, forums to present projects and blended learning with digital technology may be useful educational strategies. The institutional processes often led to delays in the completion of the research assignment. Various components were identified: a time-consuming ethics application process, an equally lengthy process to apply for provincial approval, a lack of availability of transcription and translation services, and a lack of financial assistance. Even though participants managed to navigate the lack of financial assistance, this has been recognised as a significant barrier. Registrars need technical assistance in navigating the research journey, which may, for example, include administrative or statistical support. The process of appointing the supervisor, the accessibility and responsiveness of the supervisor, the quality of feedback and the supervisor’s competing responsibilities all contributed to completion of the research assignment. In the case of a supervisor with adequate expertise, effective communication, constructive feedback and role modelling, the respondents reported an improved research experience. The importance of these characteristics of effective supervisors has been noted elsewhere. External and unpredictable factors also contributed to various delays in the research process. The COVID-19 pandemic affected many of the respondents, as research projects were put on hold and teaching was also disrupted. In the South African context, it is likely that future disruptions may involve community protest and unrest, or climate-related events. Registrars occupy a clinical role with clinical responsibilities, and the balancing act between clinical duties, a heavy academic workload, commuted overtime, family responsibilities and performing research was seen as very challenging. The burden of clinical responsibilities and a lack of time are identified as the commonest obstacles to completing the research assignment. The health services need to acknowledge the time required for the research component and allow dedicated blocks of time and leave for this purpose. Delay in completing the research assignment is one of the key factors reducing the throughput of registrars and the supply of new family physicians. Improving the research process, therefore, will contribute to meeting the goals set by the SAAFP for an increased supply of family physicians at district hospitals and health centres. Improving the quality of research will also avoid the trap of producing a stream of low-quality, unreliable or invalid evidence that may do more harm than good when published or presented to policymakers. Strengths and limitations Extreme case purposive sampling enabled a balance of respondents between those who did and did not complete their research on time. Saturation of data was achieved before all the planned interviews were conducted. The researcher conducted the interviews herself, was familiar with the content, and judged that no new themes were emerging in the last three interviews. Although all nine training programmes were represented in the data, three programmes had two respondents and six had only one respondent.
The three universities with multiple respondents had the largest output of new graduates (personal communication from the SAAFP), and registrars with both completed and uncompleted research were interviewed. It is likely, therefore, that the findings are a valid exploration of the research experience. Overall, seven respondents had completed their research and five had not. Two of the final interviews were with those who had not completed, and these contributed to the decision on saturation of data. The researcher, who also acted as the interviewer for data collection, is a registrar in family medicine and was going through the process of performing her own research during the data collection phase. Although extra care was taken to remain neutral and to prevent her own opinions and experiences from influencing the data, the researcher is aware that she may have influenced the interpretation of the views and insights shared during the interview process. Two of the respondents were also known to the researcher, which could have influenced the interpretation of the data. On reflection, the advantages of the researcher acting as interviewer included an improved understanding of the context of the interviewee. The second author (R.J.M.) supervised the interview and analysis processes, which ensured a high level of reflexivity throughout and mitigated any loss of objectivity. Because the research was performed across all training programmes in South Africa, the findings should be transferable to these programmes. Although the data are only from the South African context, the findings could be transferred to training programmes in similar contexts in other African, low- or middle-income countries. Implications The following recommendations are evident from the findings. Teaching should focus on assisting registrars with the incremental and practical steps involved in the research journey and provide sufficient theory to support completion of these tasks, as opposed to generic, didactic teaching about research methodology that is unconnected to the registrar’s actual study design and learning needs at that moment. Teachers should be aware of the lack of prior learning in this area and the need to deconstruct the research journey so that it is less overwhelming. The registrar–supervisor relationship is critical: the supervisor must have sufficient research expertise and be responsive to communications, and registrars would also value shared decision-making in the appointment of supervisors. Institutions should provide ongoing faculty development in postgraduate supervision. Institutions also need to improve the timeliness of ethics review, as registrars have a limited timeframe and their studies are usually small in scale and of low ethical risk. Provinces should ensure that registrar research is supported, by giving permissions in a timely fashion and enabling opportunities for dedicated time (e.g. special leave) for key aspects of the research. Managers and supervisors should be more aware of the difficulties in balancing personal, professional and academic responsibilities. Registrars would have appreciated more opportunities for financial assistance with research costs from the universities. Expanding the research question to include the views of the supervisors would be of value to explore this phenomenon in further depth. Further research could quantify the issues raised here, and this could provide additional evidence for training programmes.
This study did not specifically explore differences between rural and urban training complexes, and this could be a focus of future research. It would also be of interest to replicate this work in other specialist training programmes and determine whether these issues apply more broadly.

The successful and timely completion of the research assignment for the Master's component of the Family Medicine training programme in South Africa is a complex and interdependent process. Multiple factors are important and interact: the registrar's prior exposure to research, the teaching of research skills, the academic institutional processes, the individual experience of performing the research assignment, and the balancing of clinical duties with academic and personal responsibilities. Furthermore, the registrar–supervisor relationship was a critical component. External unpredictable factors, such as the COVID-19 pandemic and social unrest, could also impact on the process. As shown in , the interconnectedness of these factors is evident. The training context and health system where the registrar worked also contributed to the overall research experience. Universities that offered formal teaching appropriate to registrars, a collaboratively selected registrar–supervisor relationship and adequate institutional support facilitated an improved research experience. These multiple factors and the complexity of the process have been recognised in other international studies as well.

Registrars are usually building on prior learning when they develop their clinical skills, but this was not the case with the research assignment. Although registrars are enrolled for a Master's degree, they have minimal research experience from their undergraduate training and are less prepared than other Masters-level students. Supervisors appeared to assume, incorrectly, that registrars had prior learning, whereas most were coming to this as a completely new and daunting task. This led to feelings of inadequacy, incompetence and of being overwhelmed. For many, this research assignment was their first interaction with the research process and could influence their future professional identity as a researcher. Family physicians have a role to fulfil as researchers in the district healthcare system, and being competent in performing research contributes to excellence in their roles as mentors, teachers, clinical governance leaders and capacity builders. The development of research competencies is essential to success, and should be intentional, structured and incremental. Teaching programmes need to focus on building research competencies. Limited research and supervisory capability have been identified as a key factor in several specialist training programmes. There is a need to build such capacity through training opportunities, collaborative cohort models of supervision, further training at a doctoral level, practice-based research networks and mentoring.

Although participants reported formal teaching of research competencies, there was no standardised format across training programmes. A lack of a uniform approach and agreement on what is required is a widespread problem. It appeared that a focus on the practical steps of completing the research was more useful than a didactic, theory-based approach to research that was unrelated to the immediate task. Teaching should recognise the lack of prior learning, focus on methods, be appropriate to the disciplinary field, and be tailored to the learning needs.
The successful and timely completion of the research assignment is a complex problem. Lack of prior exposure to research made the assignment feel overwhelming, and this needs to be addressed in both teaching and supervision. The supervisor–registrar relationship was central to success, and supervisors needed both expertise and responsiveness. Registrars would like more shared decision-making in the appointment of supervisors. Formal teaching should be tailored to the practical steps and tasks of the research journey, and not just provide generic didactic teaching on research methodology. Institutions need to be supportive, through efficient processes for ethics and permissions and opportunities for small-scale funding. Strategies are needed to cope with the competing demands of clinical work, personal life, academic tasks and research; dedicated time and special leave can assist with this. Training programmes should take note of the issues raised by registrars and consider revisions to their teaching and management of the research journey.
Oral Administration of East Asian Herbal Medicine for Inflammatory Skin Lesions in Plaque Psoriasis: A Systematic Review, Meta-Analysis, and Exploration of Core Herbal Materials
018756bc-f6e2-47de-9289-8f2138e9c2ea
9230602
Pharmacology[mh]
Psoriasis is an inflammatory autoimmune skin disease with various clinical manifestations, and there are millions of these patients worldwide . The prevalence of this disease is reported differently in each country, and the overall prevalence is known to be between 0.14% and 1.99% . Most patients with psoriasis are exposed to very negative psychological effects due to skin findings in exposed areas, such as the face and limbs, as well as shortened life expectancy due to complications of the disease . The seriousness of the problem is also highlighted by the research findings, which show that more than 20% of psoriasis patients are depressed, which can lead to suicidal conduct in severe situations . In addition, recent studies have reported that psoriasis is associated with various chronic diseases that can negatively affect life expectancies, such as psoriatic arthritis, hypertension, type 2 diabetes, dyslipidemia, myocardial infarction, and stroke . This means that psoriasis should be regarded as a systemic disease that can increase the social burden beyond a focal aesthetic problem for individual patients . Therefore, it is a very important medical task at present to find a way to reduce the physical, social, and psychological problems caused by psoriasis through active medical management. There are numerous clinical phenotypes of psoriasis, but plaque psoriasis, also known as psoriasis vulgaris, accounts for around 80% to 90% of cases . Plaque can be expressed in a wide variety of thicknesses and sizes, and often appears as skin lesions accompanied by scales on the face, elbow, lumbosacral region, and scalp . In mild cases where these plaques are less than 3–5% of the body surface, topical therapy or phototherapy can often be helpful . However, for moderate-to-severe plaque psoriasis, oral systemic medications are required . Oral agents that have been commonly used for severe plaque psoriasis include acitretin, apremilast, ciclosporin, methotrexate, etc. . Recently, many biologics targeting a specific pathway of the immune system have been developed . Even though many of these conventional medicines (CM) already exist, there are still problems that need improvement with respect to systemic therapy for psoriasis. For example, acitretin is contraindicated in women of childbearing age due to teratogenicity, and mild side effects such as dose-dependent hair loss and xerosis have been reported . Meanwhile, methotrexate, which has been used for a long time, also has adverse effects such as hepatotoxicity and bone marrow suppression that can lead to cirrhosis . Although biologics report improved effects compared to conventional oral drugs, there are still a not small proportion of patients who do not respond to medication at all. On the other hand, the cost of these drugs is also a significant factor that lowers adherence to treatment and lowers accessibility. Therefore, additional research on new drugs for the treatment of psoriasis with improved cost-effectiveness while having efficacy and safety not inferior to existing CMs is a subject of sufficient value. East Asian herbal medicine (EAHM) refers to natural materials and theories used as medicines for the treatment of diseases in many countries in East Asia, including Korea, China, Taiwan, and Japan . EAHM has a distinct prescribing principle that has been developed during many years of use . 
In addition, it is distinctly different from natural materials in other regions of the world in that many of the same medicinal herbs appear in the pharmacopeia of East Asian countries. EAHM is not only being actively used in actual clinical practice, but also can be a useful resource for the discovery of new drugs based on accumulated experience and research . For the treatment of psoriasis, a considerable amount of evidence on the efficacy and safety of EAHM has already been established through previous studies . Looking at these, it is easy to confirm that EAHM offers evident therapeutic benefits in terms of the severity of psoriasis-related skin damage, and treatment response rate, and is a relatively safe intervention. Meanwhile, although the mechanism of psoriasis has not been fully elucidated, it is known that a wide variety of inflammation-related pathways are involved in pathogenesis. Given this, it is logical to expect EAHM, whose basic mechanism is a multi-component/multi-target action, to be helpful in modifying the immune system and systemic inflammatory states linked to psoriasis manifestation . Despite the positive potential of EAHM for the treatment of psoriasis, there are problems to be solved first in the process of developing it into a useful drug. First of all, EAHM has the characteristic of being used in the form of a polyherbal formulation tailored to the individual patient’s findings, which is an important difference from herbal medicine in other regions of the world . In this regard, EAHM’s pharmacological activity of individual herbs as well as the synergistic effect obtained from the combination of several herbs is a key therapeutic mechanism . For this reason, it is not easy to select candidate materials with appropriate indications and mechanisms for the treatment of specific diseases among numerous EAHM. Narrowing the field of view to meta-analysis level evidence, several studies have dealt with the effects of EAHM monotherapy and EAHM and other intervention combination therapy simultaneously without distinguishing them. Moreover, in numerous studies verifying the effect of EAHM on psoriasis, discussions of various formulations and routes such as fumigation and ointments other than oral preparations are mixed. This suggests that it is difficult to see that the evidence for EAHM monotherapy with a specific route of administration has been established robustly. Therefore, at the present time, it is necessary to evaluate the efficacy and safety of EAHM for psoriasis based on a more rigorous study design for the route of administration and control group to be compared and to derive meaningful new drug candidate materials based on this data. In accordance with the above recognition, we conducted a study according to the following objectives to provide clinicians with a clearer range of evidence, and at the same time, achieve the objective of exploring useful hypotheses for drug discovery: (1) efficacy and safety of EAHM monotherapy with the oral route of administration in inflammatory skin lesions of psoriasis are evaluated through the systematic review without limitation in scope. (2) Data mining on the herb data collected through this review is performed to derive a hypothesis related to the core EAHM material for psoriasis. This study was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis 2020 statement . 
The protocol of this systematic review was registered in PROSPERO (Registration Number: CRD42022296837, available from: https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42022296837 , accessed on 14 May 2022). 2.1. Search Strategy Randomized controlled trials (RCT) that evaluated the efficacy and safety of EAHM monotherapy for plaque psoriasis were searched in the following 10 electronic databases from their inception until 29 July 2021: three English databases (PubMed, Cochrane Library, EMBASE), four Korean databases (Korean Studies Information Service System (KISS), Research Information Service System (RISS), Oriental Medicine Advanced Searching Integrated System (OASIS), Korea Citation Index (KCI)), two Chinese databases (Chinese National Knowledge Infrastructure Database (CNKI), Wanfang data), one Japanese database (CiNii). The following Boolean format was used for the search: (Psoriasis[Mesh]) AND ((Psoriases[Title/Abstract]) OR (Pustulosis of Palms[Title/Abstract] AND Soles[Title/Abstract]) OR (Pustulosis Palmaris et Plantaris[Title/Abstract]) OR (Palmoplantaris Pustulosis[Title/Abstract]) OR (Pustular Psoriasis of Palms[Title/Abstract] AND Soles[Title/Abstract])) AND (“Plants, Medicinal”[MeSH] OR “Drugs, Chinese Herbal”[MeSH] OR “Medicine, Chinese Traditional”[MeSH] OR “Medicine, Kampo”[MeSH] OR “Medicine, Korean Traditional”[MeSH] OR “Herbal Medicine”[MeSH] OR “Prescription Drugs”[MeSH] OR “traditional Korean medicine”[Title/abstract] OR “traditional Chinese medicine”[Title/abstract] OR “traditional oriental medicine”[Title/abstract] OR “Kampo medicine”[Title/abstract] OR herb*[Title/abstract] OR decoction*[Title/abstract] OR botanic*[Title/abstract]). In Korean, Chinese, and Japanese databases, these search terms were appropriately modified to perform a search. Detailed search strategies are explicated in . 2.2. Study Selection 2.2.1. Type of Studies Only RCTs evaluating the efficacy and safety of oral administration of EAHM for plaque psoriasis were included. There were no restrictions on language and publication time. Some studies were excluded if they met the following criteria: (a) not RCT or quasi RCT; (b) not related plaque psoriasis or related disease; (c) primary intervention is not related EAHM; (d) not oral administration; (e) not clinical studies; (f) case reports or review; (g) not published in scientific peer-reviewed journals, including postgraduate theses or dissertations, and (h) when the experimental intervention is not EAHM monotherapy, such as combined therapy with conventional medicine. 2.2.2. Type of Participants Trials were considered eligible for inclusion if they were conducted in patients with psoriasis, with no restriction on age, gender, or race. Since the subject of this review is plaque psoriasis, clinical trials that include patients with other subtypes of psoriasis such as psoriatic arthritis, guttate psoriasis, palmoplantar pulposus, and erythrodermic psoriasis were excluded from the review. 2.2.3. Type of Interventions RCTs that compared EAHM as the active intervention in the treatment group versus placebo or CM in the control group were included. All forms of EAHM such as decoction, granule, capsule, compound preparation for the psoriasis treatment were included. There were no restrictions on the dose and duration of treatment for EAHM, but the mode of delivery was limited to oral intake. 
Studies in which East Asian medical interventions such as acupuncture, massage, or non-drug therapy were combined only in the treatment group were excluded. Studies in which the comparators included other EAHMs were excluded. Additionally, studies in which the composition of the specific herbal constituents of the EAHM prescription could not be verified were omitted.

2.2.4. Type of Outcome Measures The response rate of patients whose Psoriasis Area and Severity Index (PASI) improved by greater than 60% (PASI 60) and 70% (PASI 70), respectively, was employed as the primary endpoint. Meanwhile, the absolute difference between groups in PASI score was also used as a primary outcome. Secondary outcomes included tumor necrosis factor alpha (TNF-α), Dermatology Life Quality Index (DLQI), Interleukin-17 (IL-17), and Interleukin-23 (IL-23). In addition, to evaluate the safety of the intervention for psoriasis patients, the incidence of adverse events (AEs) was also included as a secondary outcome.

2.2.5. Data Extraction The titles and abstracts of potentially eligible studies were independently screened by 2 investigators (HGJ, HK) according to the above-mentioned search strategy. Afterward, a full-text review was performed based on the inclusion and exclusion criteria. Subsequently, information on the included studies was extracted independently by 2 reviewers (HGJ, HK). The following information was collected: title, author's name, country in which the clinical trial was conducted, diagnostic criteria, trial design, publication year, sample size, participant age, sex distribution, interventions in the treatment and comparator groups, treatment duration, outcome index, reported adverse events, and the composition and dosage of the EAHM. Any discrepancy was discussed with the third author (DL).

2.2.6. Methodological Quality Assessment The methodological quality of each included study was evaluated independently by 2 investigators (HGJ, HK) according to the revised tool for risk of bias in randomized trials, RoB 2.0. It comprises five domains: bias arising from the randomization process, bias due to deviations from intended interventions, bias due to missing outcome data, bias in measurement of the outcome, and bias in selection of the reported result. Methodological quality was assessed on three levels: "High risk of bias", "Low risk of bias" and "Some concerns". Disagreements between the two investigators were resolved with the help of the third author (DL).

2.2.7. Statistical Analysis

Evidence Synthesis
Evidence synthesis of the included studies with available data was performed by calculating the effect size and 95% CI using only the random-effects model. Heterogeneity was considered statistically significant when the p-value based on the χ² test was less than 0.10 or I² was 50% or more. Two-sided p < 0.05 was considered statistically significant. Statistical synthesis of individual study results was performed in R version 4.1.2 and RStudio (Version 1.4.1106, Integrated Development for R. RStudio, PBC, Boston, MA, USA) using the default settings of the "meta" and "metafor" packages. The studies were grouped according to the type of intervention (EAHM) and comparator (CM or placebo). Relative risks (RR) and 95% confidence intervals (CI) were calculated for PASI 60 and PASI 70. Mean differences (MD) and 95% CIs were calculated for the continuous PASI score and DLQI. For TNF-α, IL-17, and IL-23, standardized mean differences (SMD) and 95% CIs were calculated to integrate results reported with several types of indicators for the same measurement target. Because AEs occur far less frequently than the other outcomes, and a causal interpretation was required, AEs were analyzed using the odds ratio (OR). In this review, in order to reveal the exact value of the effect size without relying only on the p < 0.05 significance threshold when interpreting the primary outcome synthesis, a drapery plot was additionally illustrated along with the forest plot. If heterogeneity was confirmed in an outcome that synthesized the results of more than 10 trials, the following additional analyses were performed to identify its cause. First, sensitivity analysis was performed according to the leave-one-out method to determine whether outliers in the included data affected the result. If no outliers were identified, meta-regression analysis was performed for three prespecified moderators, namely (i) type of comparator, (ii) source of investigational medication, and (iii) type of EAHM formulation, and subgroup analyses were then conducted for the moderators that had a substantial impact on the result. To detect publication bias, a contour-enhanced funnel plot was used for the outcome that included the most studies. Where asymmetry was visually apparent in the funnel plot, Egger's test and Begg's test were additionally performed to specifically confirm the existence of publication bias.
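To make the synthesis described above concrete, a minimal sketch in R using the meta package named in this subsection is shown below. All counts and summary statistics are hypothetical placeholders rather than data extracted in this review, and exact argument names can differ between versions of the package.

```r
# Minimal sketch of the evidence synthesis described above (hypothetical data only).
library(meta)

# Hypothetical PASI 60 responder counts for three trials
pasi60 <- data.frame(
  study   = c("Trial A", "Trial B", "Trial C"),
  event.e = c(30, 25, 40), n.e = c(45, 40, 60),  # EAHM arm
  event.c = c(22, 20, 31), n.c = c(44, 41, 59)   # CM control arm
)

# Relative risk with a random-effects model (DerSimonian-Laird tau^2)
m_rr <- metabin(event.e, n.e, event.c, n.c, studlab = study,
                data = pasi60, sm = "RR", method.tau = "DL")
summary(m_rr)
forest(m_rr)    # forest plot
drapery(m_rr)   # drapery plot (available in recent versions of 'meta')

# Hypothetical continuous PASI scores: mean difference, random effects
pasi_md <- data.frame(
  study  = c("Trial A", "Trial B", "Trial C"),
  n.e = c(45, 40, 60), mean.e = c(4.1, 5.0, 3.8), sd.e = c(2.0, 2.4, 1.9),
  n.c = c(44, 41, 59), mean.c = c(6.8, 7.1, 6.0), sd.c = c(2.2, 2.5, 2.1)
)
m_md <- metacont(n.e, mean.e, sd.e, n.c, mean.c, sd.c, studlab = study,
                 data = pasi_md, sm = "MD", method.tau = "DL")
summary(m_md)

# Follow-up analyses sketched in the text (only sensible with roughly 10+ trials):
# metainf(m_rr)                                      # leave-one-out sensitivity analysis
# metareg(m_rr, ~ comparator)                        # meta-regression on a hypothetical moderator column
# funnel(m_rr, contour.levels = c(0.9, 0.95, 0.99))  # contour-enhanced funnel plot
# metabias(m_rr, method.bias = "Egger")              # Egger's test ("Begg" for the rank test)
```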
Hierarchical Agglomerative Clustering
The EAHM prescriptions used in each study reflect the medical goal of maximizing the synergistic effect of a core herb combination. Therefore, hierarchical cluster analysis was used to understand the structure of the EAHM prescriptions used in the individual studies. The analysis used in this study is agglomerative clustering, in which each observation is initially considered as a cluster of its own (a leaf); the most similar clusters are then successively merged until only one single large cluster (the root) remains. The dissimilarity between individual herb constituents was treated as a distance and measured with the Euclidean distance, which corresponds to the shortest distance between points when the difference between each characteristic value is expressed on the coordinate plane:

(1) \( d(\chi_i, \chi_k) = \sqrt{\sum_{j=1}^{p} (\chi_{ij} - \chi_{kj})^2} \)

Cluster analysis in this study was performed on herbal constituents that appeared in at least 20% of the included clinical trials.
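A small sketch of this clustering step is given below, again in R with base functions only. The herb-by-trial matrix is a toy example, and the Ward linkage is an assumption, since the linkage criterion is not stated in the text.

```r
# Sketch of the hierarchical agglomerative clustering described above,
# using a hypothetical trial-by-herb presence/absence matrix (toy data).
herb_matrix <- matrix(
  c(1, 1, 0, 1,
    1, 0, 1, 1,
    0, 1, 1, 0,
    1, 1, 1, 0),
  nrow = 4, byrow = TRUE,
  dimnames = list(paste0("Trial", 1:4),
                  c("HerbA", "HerbB", "HerbC", "HerbD"))
)

# Keep herbs appearing in at least 20% of the trials
freq <- colMeans(herb_matrix)
kept <- herb_matrix[, freq >= 0.20, drop = FALSE]

# Euclidean distance between herb columns, then agglomerative clustering:
# each herb starts as its own leaf and the closest clusters are merged
# step by step until a single root cluster remains.
d  <- dist(t(kept), method = "euclidean")
hc <- hclust(d, method = "ward.D2")   # linkage choice is an assumption
plot(hc, main = "Herb clusters (toy data)")
```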
Social Network Analysis
To explore the interdependence of the fundamental herbal constituents utilized in the EAHM prescriptions and to uncover the core material of the connections, a social network analysis was performed on the herb data of the individual studies in this review. On the surface, the "complexity" discussed in social network analysis appears perplexing, yet the term implies that an order based on the interrelationships of the constituent pieces exists. EAHM prescribing is an excellent illustration of this kind of complexity, since it is guided by a combination of strict dosage principles and the tacit knowledge of physicians who have worked with these formulas for a long period. For this reason, network analysis methodology has already been used in various ways in research analyzing EAHM.

Social network analysis in this review focused on two aspects. First, an undirected network was assumed, and the degree distribution was observed for the connectivity between the frequent herbal materials used in each EAHM prescription. Since the network is undirected, the average connection degree can be expressed as

(2) \( A = \sum_{k=1}^{n} k\,P(k) = \frac{2E}{n} \) (n: number of nodes, E: number of links).

Second, centrality was measured to identify herb materials with relatively large influence by comparing the influence of specific herbal medicines within the relationships among the frequent herbs. Eigenvector centrality was used as the measure because it reflects the relationships between the individual herbs of EAHM that are prescribed at the same time. This measure can be expressed as

(3) \( C_i = \frac{1}{\lambda} \sum_{j \in N(i)} A_{ij} C_j \)

where λ is a constant (eigenvalue) determined by the algorithm, N(i) is the set of herbs neighboring herb i, A_ij is 1 if herbs i and j are connected in the n × n adjacency matrix A and 0 if they are not, and C_j is the eigenvector centrality value of the neighboring herb j.

2.2.8. Quality of Evidence According to Outcome Measurements The overall quality of evidence for each outcome was evaluated according to the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach, using GRADEpro. The GRADE assessment rates the overall quality of evidence at four levels: very low, low, moderate, and high. The level of evidence is lowered according to factors such as risk of bias, inconsistency, indirectness, imprecision, and publication bias.
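The degree and eigenvector-centrality quantities in Equations (2) and (3) above can be computed with a few lines of base R. The adjacency matrix below is a hypothetical four-herb co-prescription network rather than the herb data of the included trials.

```r
# Sketch of the network measures in Equations (2) and (3), computed on a
# hypothetical undirected herb co-prescription network (toy data).
A <- matrix(
  c(0, 1, 1, 1,
    1, 0, 1, 0,
    1, 1, 0, 0,
    1, 0, 0, 0),
  nrow = 4, byrow = TRUE,
  dimnames = list(c("HerbA", "HerbB", "HerbC", "HerbD"),
                  c("HerbA", "HerbB", "HerbC", "HerbD"))
)

# Average degree of an undirected network: 2E / n
n <- nrow(A)
E <- sum(A) / 2                      # each undirected link is counted twice in A
avg_degree <- 2 * E / n

# Eigenvector centrality: C is the leading eigenvector of A, so A %*% C = lambda * C
eig        <- eigen(A)
centrality <- abs(eig$vectors[, 1])  # leading eigenvector, sign-normalized
centrality <- centrality / max(centrality)
names(centrality) <- rownames(A)

avg_degree
sort(centrality, decreasing = TRUE)  # the most "central" herbs in the toy network
```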
3.1. Study Identification A total of 2434 studies were retrieved by the electronic database and manual searches, among which 638 duplicate documents were removed. After screening the titles and abstracts, 1115 studies were excluded for at least one of the following reasons: (i) not related to psoriasis, (ii) primary intervention not related to EAHM, (iii) not oral administration, (iv) not a clinical study, (v) review article, (vi) case report or clinical experience, (vii) not a randomized controlled study. After evaluation of the 460 articles for which the full text was available among the remaining literature, 404 studies were excluded for the following reasons: (i) quasi-randomized controlled trial, (ii) duplicated document, (iii) inappropriate study design, (iv) herb ingredients not disclosed, (v) not oral administration, (vi) not published in a peer-reviewed scientific journal, (vii) not an appropriate psoriasis subtype, (viii) not EAHM monotherapy, (ix) suspicion of salami slicing. Finally, 56 published studies were included in this review. shows the results of the database search.

3.2. Study Characteristics The sample size of the included studies ranged from 40 to 260, and a total of 4966 participants were separated into the experimental group (n = 2605) and the control group (n = 2361). The psoriasis subtype in all included studies was psoriasis vulgaris or plaque psoriasis. One study was published in English, and all other studies were published in Chinese. The composition and formulation of the administered EAHM were reported in all included studies. Only one study used a placebo preparation as the control group; all other trials used CM as the control group. The CMs used as control medications were acitretin, compound amino-polypeptide agents, methotrexate, roxithromycin, penicillin, cephalosporins, vitamin A, glucocorticoids, and other topical agents. The duration of treatment in the eligible studies ranged from 2 weeks to 6 months. The characterization of the 56 included studies was summarized in detail in . 3.3.
The overall quality of evidence for each outcome was evaluated according to the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach (GRADEpro) . The GRADE assessment rates the overall quality of evidence at four levels: very low, low, moderate, and high. The level of evidence is downgraded for factors such as risk of bias, inconsistency, indirectness, imprecision, and publication bias.

3.1. Study Identification
A total of 2434 studies were retrieved by the electronic database and manual searches, of which 638 duplicate records were removed. After screening titles and abstracts, 1115 studies were excluded for at least one of the following reasons: (i) not related to psoriasis, (ii) primary intervention not related to EAHM, (iii) not oral administration, (iv) not a clinical study, (v) review article, (vi) case report or clinical experience, (vii) not a randomized controlled study. Of the remaining literature, 460 articles with available full text were evaluated, and 404 studies were excluded for the following reasons: (i) quasi-randomized controlled trial, (ii) duplicated document, (iii) inappropriate study design, (iv) herb ingredients not disclosed, (v) not oral administration, (vi) not published in a peer-reviewed scientific journal, (vii) inappropriate psoriasis subtype, (viii) not EAHM monotherapy, (ix) suspicion of salami slicing. Finally, 56 published studies were included in this review. shows the results of the database search.

3.2. Study Characteristics
The sample size of the included studies ranged from 40 to 260, and a total of 4966 participants were divided into the experimental group (n = 2605) and the control group (n = 2361). The psoriasis subtype in all included studies was psoriasis vulgaris or plaque psoriasis. One study was published in English, and all other studies were published in Chinese. The composition and formulation of the administered EAHM were reported in all included studies. Only one study used a placebo preparation as the control group ; all other trials used CM as the control. The CMs utilized as control medications were acitretin, compound amino-polypeptide agents, methotrexate, roxithromycin, penicillin, cephalosporin, vitamin A, glucocorticoids, and other topical agents. The duration of treatment in the eligible studies ranged from 2 weeks to 6 months. The characteristics of the 56 included studies are summarized in detail in .

3.3. Risk of Bias
The methodological quality of the 56 included studies is summarized in . The risk of bias was assessed with the RoB 2.0 tool . The overall risk of bias in all studies was rated as "some concerns", reflecting the fact that domain 2, domain 4, and domain 5 were rated as "some concerns" in all studies except one . None of the studies rated as "some concerns" in domains 2 and 4 employed a double-blind design, and it is unclear whether the outcome assessor and the interventionist were clearly separated. In addition, a pre-registered protocol could not be confirmed for any of the studies. Owing to these common problems, the risk of bias could not be completely excluded in any study.

3.4. Primary Outcomes
3.4.1. PASI 70
A meta-analysis was performed on the 18 studies that reported PASI 70 . The combined results showed that EAHM had a statistically significantly better effect than the CM control on PASI 70 response (18 trials, n = 1865; RR: 1.2845, 95% CI: 1.1906 to 1.3858, p < 0.0001; heterogeneity: χ² = 21.87, df = 17, I² = 22.3%, p = 0.1897; A,B).
3.4.2. PASI 60
A total of 29 studies compared EAHM with a CM control on PASI 60 . The pooled effect of EAHM on PASI 60 was significantly better than that of the CM control (29 trials, n = 2479; RR: 1.1923, 95% CI: 1.1134 to 1.2769, p < 0.0001; heterogeneity: χ² = 101.24, df = 28, I² = 72.3%, p < 0.0001; A,B). Only one trial reported the effect of EAHM versus a placebo control on PASI 60 . The PASI 60 response rate was significantly greater for EAHM than for placebo (one trial, n = 56; RR: 3.7500, 95% CI: 1.4207 to 9.8983, p = 0.0076).
3.4.3. Continuous PASI Score
In the 27 studies comparing EAHM with a CM control, EAHM improved the continuous PASI score significantly more than the CM control (27 trials, n = 2138; MD: −2.3386, 95% CI: −3.3068 to −1.3704, p < 0.0001; heterogeneity: χ² = 554.36, df = 26, I² = 95.3%, p < 0.0001; A,B) .

3.5. Secondary Outcomes
3.5.1. IL-17, IL-23, TNF-α and DLQI
A meta-analysis of four studies showed that EAHM significantly reduced IL-17 compared with the CM control (four trials, n = 262; SMD: −1.1683, 95% CI: −2.1789 to −0.1577, p = 0.0235; heterogeneity: χ² = 40.40, df = 3, I² = 92.6%, p < 0.0001; A). IL-17 was also measured in the one trial that compared EAHM with a placebo control . A significant reduction in the IL-17 level was observed with EAHM (one trial, n = 56; MD: −235.8200 pg/mL, 95% CI: −305.4477 to −166.1923, p < 0.0001). However, there was no significant difference between EAHM and the CM control for IL-23 (four trials, n = 262; SMD: −1.3204, 95% CI: −3.0143 to 0.3734, p = 0.1265; heterogeneity: χ² = 69.49, df = 3, I² = 95.7%, p < 0.0001; B). Seven studies compared the effect of EAHM with CM in reducing TNF-α . Meta-analysis showed that EAHM significantly reduced TNF-α compared with the CM control (seven trials, n = 584; SMD: −1.4396, 95% CI: −2.3803 to −0.4990, p = 0.0027; heterogeneity: χ² = 81.69, df = 6, I² = 92.7%, p < 0.0001; C). DLQI was reported in four trials . Compared with the CM control, DLQI was significantly lower in the EAHM group (four trials, n = 259; MD: −3.1161, 95% CI: −4.2796 to −1.9526, p = 0.0001; heterogeneity: χ² = 3.72, df = 3, I² = 19.4%, p = 0.2933; D).
3.5.2. AEs
Among the included studies, 33 trials (34/56, 60.71%) reported information related to AEs . Four of these studies did not report AEs in the control group, and two studies reported the number of AEs in duplicate.
On the other hand, five studies reported AEs in both groups. Therefore, the results of 22 studies could be synthesized by comparing incidence rates. The aggregated results of these 22 trials suggested that the incidence of AEs was significantly reduced by EAHM compared with the CM control (22 trials, n = 2066; OR: 0.1017, 95% CI: 0.0630 to 0.1643, p < 0.0001; ). For the incidence of AEs, an additional comparison was performed through subgroup analysis according to the type of CM in the control group. Meta-analysis revealed that EAHM had a lower incidence of AEs than amino-polypeptide agents (eight trials, n = 871; OR: 0.0939, 95% CI: 0.0399 to 0.2210, p < 0.0001; ). In comparison with acitretin, EAHM also showed a significant reduction in the incidence of AEs (10 trials, n = 976; OR: 0.0820, 95% CI: 0.0413 to 0.1628, p < 0.0001; ). The four studies comparing EAHM with other conventional medicines likewise showed a significant reduction in the incidence of AEs (four trials, n = 219; OR: 0.2428, 95% CI: 0.0879 to 0.6708, p < 0.0001; ). None of the reported AEs was severe, and all resolved without long-term treatment. The details of the adverse events reported in each study are recorded in .

3.6. Assessing Heterogeneity
3.6.1. Sensitivity Analysis
Considerable heterogeneity was found in the syntheses of the PASI 60 and continuous PASI score outcomes, with I² values of 72% and 95%, respectively. In the drapery plot, some studies also appeared to be outliers. Accordingly, a leave-one-out sensitivity analysis was performed to determine whether any specific study corresponding to these outliers was the cause of heterogeneity for these two results. As shown in , omitting any individual study did not have a noteworthy effect on the heterogeneity ( A,B).
3.6.2. Meta-Regression and Subgroup Analysis
The sensitivity analysis confirmed that outliers in individual studies did not account for the heterogeneity. Hence, to identify other potential causes, a meta-regression analysis was performed on moderators expected to influence the results. The moderators evaluated were "type of comparator", "source of investigational medicine", and "sample size", and each was applied to the pooled findings for the PASI 60 outcome and the continuous PASI score. In the meta-regression for PASI 60, the type of comparator had a statistically significant effect on the pooled results (p = 0.0104; ), whereas the source of investigational medicine (p = 0.6945; ) and sample size (p = 0.8941; ) did not. In the meta-regression of the pooled continuous PASI score results, neither the type of comparator (p = 0.1902; ), the source of investigational medicine (p = 0.5499; ), nor the sample size (p = 0.4478; ) had a significant influence on the effect sizes. Subgroup analysis indicated that the cause of heterogeneity may be related to the type of comparator . Subgroup analyses were not performed for the other moderators that were not significant in the meta-regression. For endpoints other than PASI 60 and the continuous PASI score, additional sensitivity and subgroup analyses were not performed because the heterogeneity of the pooled results was low or the number of included studies was very small.
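To illustrate the I² calculation and the leave-one-out procedure referred to here, the sketch below pools hypothetical log risk ratios under a DerSimonian-Laird random-effects model and re-pools after omitting each study in turn. The effect estimates, variances, and the choice of the DerSimonian-Laird estimator are assumptions for demonstration; they are not the review's data or necessarily its exact estimator.

```python
# Illustrative leave-one-out sensitivity check on a random-effects pooling.
# yi = hypothetical log risk ratios, vi = their hypothetical variances.
import numpy as np

def pool_dl(yi, vi):
    """DerSimonian-Laird pooling: returns pooled estimate, tau^2, and I^2 (%)."""
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    w = 1.0 / vi                              # inverse-variance (fixed-effect) weights
    mu_fe = np.sum(w * yi) / np.sum(w)
    q = np.sum(w * (yi - mu_fe) ** 2)         # Cochran's Q
    df = len(yi) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (vi + tau2)                  # random-effects weights
    return np.sum(w_re * yi) / np.sum(w_re), tau2, i2

yi = [0.25, 0.10, 0.32, 0.18, 0.60]           # hypothetical study effects (log RR)
vi = [0.020, 0.015, 0.030, 0.025, 0.040]      # hypothetical within-study variances

mu, tau2, i2 = pool_dl(yi, vi)
print(f"all studies: estimate={mu:.3f}, tau^2={tau2:.3f}, I^2={i2:.1f}%")

# leave-one-out: omit each study in turn and re-pool
for k in range(len(yi)):
    mu_k, _, i2_k = pool_dl(yi[:k] + yi[k + 1:], vi[:k] + vi[k + 1:])
    print(f"omit study {k + 1}: estimate={mu_k:.3f}, I^2={i2_k:.1f}%")
```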
3.7. Assessing Publication Bias
A contour-enhanced funnel plot, Egger's test, and Begg's test were used to assess potential publication bias for the primary outcomes of this meta-analysis. Asymmetric shapes were observed in the contour-enhanced funnel plots for all outcomes, suggesting potential bias ( A–C). There was no evidence of significant publication bias in either Egger's test or Begg's test for PASI 70 (Egger's test: p = 0.3501; Begg's test: p = 0.1396). For PASI 60, publication bias was statistically significant in Egger's test but not in Begg's test (Egger's test: p < 0.0001; Begg's test: p = 0.8511). For the continuous PASI score, publication bias was likewise significant in Egger's test but not in Begg's test (Egger's test: p = 0.0027; Begg's test: p = 0.1038). Overall, although no unequivocal evidence of publication bias was found in these investigations, the risk of potential publication bias could not be fully excluded.
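For reference, Egger's test amounts to regressing each study's standardized effect on its precision and testing whether the intercept departs from zero. The sketch below shows this with hypothetical effect sizes and standard errors; the numbers are placeholders, not values from the included trials.

```python
# Illustrative sketch of Egger's regression test for funnel-plot asymmetry.
# Effect sizes (log RR) and standard errors below are hypothetical placeholders.
import numpy as np
import statsmodels.api as sm

yi = np.array([0.25, 0.10, 0.32, 0.18, 0.60, 0.45, 0.05])  # hypothetical log RRs
se = np.array([0.14, 0.12, 0.17, 0.16, 0.20, 0.19, 0.10])  # hypothetical SEs

snd = yi / se                     # standardized effects
precision = 1.0 / se              # precision of each study

X = sm.add_constant(precision)    # the intercept is the quantity of interest
fit = sm.OLS(snd, X).fit()

intercept, p_value = fit.params[0], fit.pvalues[0]
print(f"Egger intercept = {intercept:.3f}, p = {p_value:.4f}")
# An intercept clearly different from zero (small p) suggests funnel-plot
# asymmetry, i.e., possible publication bias or small-study effects.
```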
3.8. Quality of Evidence According to Outcome Measures
In the comparison between EAHM and CM, the overall quality of evidence across all outcome measures ranged from very low to moderate. The results of the GRADE assessment are presented in .

3.9. Data Mining of EAHM Ingredients
3.9.1. Detailed Information and Distribution of EAHM Ingredients
A total of 137 herbs were employed as component materials of the tested EAHM in the 56 clinical trials covered in this review. Detailed information on the individual EAHM components is summarized in . The following 16 herbs were prescribed with high frequency, appearing in more than 20% of the included studies: Rehmanniae Radix Recens; Salviae Miltiorrhizae Radix; Glycyrrhizae Radix et Rhizoma; Moutan Radicis Cortex; Lithospermi Radix; Smilacis Rhizoma; Radix Paeoniae Rubra; Dictamni Radicis Cortex; Imperatae Rhizoma; Hedyotidis Herba; Isatidis Radix; Lonicerae Flos; Sophorae Flos; Scutellariae Radix; Forsythiae Fructus; Spatholobi Caulis. The relative frequencies of these top 16 herbal materials ranged from 21.43% to a maximum of 69.64%. In terms of herb properties, thirteen of the sixteen herbs were classed as cold, the largest proportion; two herbs were neutral, and one herb had a warm property. Herbal flavors were classed as bitter or sweet, with bitter herbs accounting for the larger share (nine herbs). Meanwhile, the specific efficacy that clinicians consider when prescribing EAHM is expressed as summary information called the "action category"; the action categories of the 16 high-frequency herbs were all classified as "heat-clearing" except for one. shows the classification information for the 16 herbs, including frequency distribution, property, taste, and action category.

3.9.2. Hierarchical Agglomerative Clustering
The characteristics of the top 16 high-frequency herbal materials were investigated using the hierarchical agglomerative clustering method. Through this analysis, pharmacological trends of the core EAHMs used to treat inflammatory skin lesions in psoriasis can be identified. Based on the frequency of use and the features of the individual herbs, the core herbs in this study could be separated into three modules. The results of the classification are shown in .

3.9.3. Social Network Analysis
Social network analysis was used to examine the mutual relationships between the 16 herbs frequently used for inflammatory skin lesions of psoriasis and to identify core materials showing higher centrality within these interrelationships. When the network between the herbs was expressed graphically, all of them were found to be closely connected, as shown in . In the calculation of eigenvector centrality to measure the prestige centrality of the individual herbs, Sophorae Flos and Scutellariae Radix scored 0.0593, and the other 14 herbal materials scored 0.0630. Accordingly, the centrality of the 16 high-frequency herbs used in more than 20% of the trials was at a generally similar level, which can be interpreted as indicating that they were regarded as closely related to one another in EAHM prescriptions for psoriasis.
4.1. Summary of the Main Finding
Based on the above analyses, our meta-analysis results suggest that oral EAHM is effective in improving the symptoms of psoriasis. Overall, in the included clinical trials, EAHM as monotherapy showed superior improvement of the skin manifestations of psoriasis compared with placebo and CM active controls on the PASI 60, PASI 70, and continuous PASI indexes. At the same time, EAHM showed an effect superior or similar to that of CM on inflammatory indicators of psoriasis such as IL-17, IL-23, and TNF-α, and it also showed positive results for quality of life. In addition, patients treated with EAHM experienced a lower incidence of AEs. In this review, 16 high-frequency materials were identified through separate data mining of the collected herbal prescription information. Most of these herbs showed a clear tendency toward the cold property and the "heat-clearing" action category, and all of the herbal materials were used in close correlation within the EAHM prescriptions.

4.2. Strength and Implications for Clinical Practice
The strength of this study is that we focused on the efficacy and safety of EAHM administered by the oral route and as monotherapy alone. Since the efficacy that can be confirmed in clinical studies of combined therapy is an add-on effect, it should be viewed as essentially different from the efficacy of the intervention given as monotherapy.
Likewise, even for materials with the same pharmacological effect, pharmacokinetics vary with the administration route, and natural products are no exception . Recently, as the scope of research on pharmaceuticals based on natural sources continues to expand, inhalation aerosols and injections are being actively developed for particular diseases, in addition to external preparations such as ointments and fumigation . Therefore, a clear definition of the administration route is bound to become an increasingly important requirement in the design of future EAHM studies. This study aimed to derive hypotheses about candidate materials and indications for oral drugs, beyond a simple meta-analysis, and within that aim the route of administration and the condition of monotherapy are clearly important. The evidence derived in this study within the above scope suggests that oral administration of EAHM monotherapy is a useful option for managing inflammatory skin lesions in psoriasis. The primary finding is that the PASI response rate and PASI severity can be significantly improved. In addition, the improvement in various inflammation-related outcomes and in DLQI is also a valuable finding. These results are all the more meaningful in that they are consistent with several previous reports . Therefore, administration of EAHM may be attempted for skin damage accompanied by inflammation in psoriasis patients. It seems reasonable to use EAHM for patients who show low compliance with, or do not respond to, conventional CM treatment. Another important finding is that the incidence of AEs is significantly reduced when EAHM is used. For patients who need systemic treatment with oral agents but experience pronounced side effects from CM, EAHM monotherapy is worth considering as an alternative. Further analysis of the EAHM prescription data revealed that herb materials with specific properties were used frequently for psoriasis. Accordingly, the commonly prescribed core herbal materials identified in this review, together with information on their close interrelationships, can help in selecting and combining appropriate herbs when constructing customized EAHM formulations for individual patients.

4.3. Implications of Core Material Exploration
For the effective indications of EAHM for psoriasis discussed above to be linked to the development of new drugs, further exploration of mechanisms and key materials is required. In this process, two important characteristics of EAHM must be considered first. The first is related to the diagnostic method of East Asian medicine, which classifies the tendency to show systemic syndromes separately, in addition to the patient's biomedical symptoms and pathology . Such a diagnostic method, which allows customized prescriptions for the same disease, is called "pattern identification" or "syndrome differentiation". The properties and action categories assigned to individual EAHM herbs represent therapeutic targets according to this diagnosis . Specific EAHM indications have primarily been differentiated using the notions of "cold syndrome" and "hot syndrome", and medications with a "hot property" or "cold property" have been administered in response.
For example, when a patient diagnosed with psoriasis complains of inflammatory skin symptoms together with physical findings such as fever, sweating, and thirst, the case can be subclassified as the hot syndrome type of psoriasis. EAHM materials that can effectively alleviate the systemic findings accompanying this type of "hot syndrome" are classified as having a cold property. Conversely, EAHMs that can control cold syndrome are generally classified as having a hot property . Recent studies exploring this topic at the molecular level have shown that EAHM classified as hot property is implicated in pathways that include neurotransmitter reuptake, cold-induced thermogenesis, blood pressure regulation, and adrenergic receptor signaling, whereas for cold property there are reports that the target genes are related to the steroid pathway. Consequently, the hot/cold properties of EAHM were presumed to be major factors in this study, implying distinct signals and mechanisms of action, and they were incorporated into the analysis . Most of the 16 high-frequency core herbs identified in this study were materials exerting a "clearing heat" action based on the "cold" property, and the cluster analysis also confirmed that many herbs with similar properties cluster together. This implies more than simply that "clearing heat" herbs are frequently used to manage the inflammatory skin symptoms of psoriasis. According to previous studies, herbs exhibiting a "clearing heat" action among EAHM are known to exert various anti-inflammatory and antiviral effects in patients with the so-called "heat" symptom pattern . Moreover, a more recent study revealed that "medicinal herbs of clearing heat" had multiple anti-inflammatory activities compared with herbs belonging to other action categories . As summarized in , the pharmacological activity of the core herbs in this study is consistent with previous knowledge in that it corresponds to anti-inflammatory and immune-modulating actions through various pathways. Therefore, the clinical efficacy of EAHM on psoriasis observed in this review appears to be strongly related to the complex anti-inflammatory mechanisms exerted by herbs belonging to the "clearing heat" category. At the same time, in future EAHM drug discovery related to psoriasis, drugs corresponding to the categories discussed above can be considered preferred candidate materials.
The other characteristic of EAHM that should be considered is the synergistic effect exerted through multi-compound action against multiple targets . As can be seen from the data in this review, EAHM is usually administered as a polyherbal formulation. Such formulation not only produces a better synergistic effect but also acts on the complex underlying mechanisms of various diseases while reducing the side effects of the individual drugs . The main prescription principle of EAHM that makes this possible is expressed as "Gun-Shin-Jwa-Sa" (King-Retainer-Officer-Messenger) . In this approach, the herbs responsible for the main effect are placed at a higher dose ratio in the "Gun" and "Shin" positions, while herbs that lessen side effects or boost synergy are placed in relatively small doses in the "Jwa" and "Sa" positions. In this way, an appropriately composed herbal combination can be expected to have amplified efficacy compared with that of a single herb.
For example, an EAHM formula composed of only Sophorae Flos and Lonicerae Japonicae Flos, both high-frequency materials in this study, reprograms the immune microenvironment and exhibits anti-melanoma effects through a mechanism that inhibits STAT3 signaling in B16F10 melanoma-bearing mice . Meanwhile, when Salviae Miltiorrhizae Radix, another core herb, was combined with Notoginseng Radix et Rhizoma at a 6:4 ratio, a synergistic interaction was observed with respect to the protective effect on endothelial cells . These previous studies suggest that, rather than predicting the effect of EAHM from the pharmacological activity of a single herb alone, considering the interactions between multiple materials together can bring better therapeutic outcomes. From this point of view, examination of the relationships between the core herbs through social network analysis showed close connectivity between all materials and an almost uniform level of centrality. This finding supports the assumption that, in the EAHM prescriptions of this study, the observed effects derive not only from the mechanisms of the individual herbs but also from combinations composed according to the "Gun-Shin-Jwa-Sa" principle. Therefore, tracking the synergy derived from combinations of the key herbs and searching for the optimal herbal combination that maximizes this synergistic interaction can be a goal of follow-up studies aimed at proposing drug candidates.

4.4. Limitations and Perspectives
To use the results and hypotheses derived from this study for clinical decision-making or follow-up research, the following limitations need to be understood. First, a significant level of heterogeneity was observed in the meta-analysis. This suggests that it is difficult to accept that all of the EAHM prescriptions included in this study are useful for psoriasis. To investigate the cause of the heterogeneity in detail, both an outlier sensitivity analysis of the individual trials and a meta-regression on pre-specified moderators were performed in this review. As a result, for PASI 60 the type of CM adopted as the active control could be identified as a cause, but for the continuous PASI score, which showed even higher heterogeneity, a specific cause could not be identified. After excluding other causes, it could be presumed that the high heterogeneity was due to the very diverse composition and dosage of the EAHM prescriptions in the included trials. A similar problem is often seen in other meta-analyses of EAHM. It stems from EAHM's prescription principle, which requires personalized combinations of herbal materials, and is highly likely to recur in future studies of the same design. The additional analysis of herbal materials using data mining was performed as a way to mitigate this essential limitation arising from the characteristics of the intervention itself. Unless a study restricts its scope to natural products manufactured by pharmaceutical companies, a separate analysis of EAHM prescriptions and herbal constituents by various methods is likely to be essential in future systematic reviews of EAHM. Second, the results on commonly prescribed herbs derived from this review merely narrowed the scope of hypotheses about core materials through descriptive statistics and unsupervised learning techniques.
Therefore, whether the identified core herbs by themselves exert a better effect on psoriasis, and whether actual synergy arises from the observed close correlations, should be verified in separate follow-up research. Based on the hypotheses presented in this study, useful candidates could be further narrowed by comparing the effects of individual EAHMs through network meta-analysis or by predicting mechanisms with network pharmacology techniques alongside laboratory research. Third, PASI 60 and PASI 70, the endpoints adopted by the largest number of included studies, were selected as the primary outcomes of this study because they are relatively well validated. However, considering that the evaluation instruments used as international standards in recent years are PASI 75 and PASI 90, bias in the results of this study cannot be completely ruled out. Therefore, to evaluate the efficacy of EAHM more objectively against placebo as well as active controls, studies using these widely adopted standard endpoints should be conducted. Fourth, most of the clinical trials included in this review lack pre-registered protocols, did not adopt double-blind methodologies, and did not describe detailed randomization procedures. This means that a number of studies cannot dispel qualitative concerns, which also affects the reliability of the results. Although the quantitative growth of EAHM-related evidence over the past decade has been remarkable, more clinical trials are still needed to ensure qualitative progress. Finally, a limitation is that all trials included in this study were conducted in China. No language restriction was applied when collecting the literature for this systematic review, and East Asian as well as English databases were searched, but only studies conducted in China met the inclusion criteria. However, as mentioned above, EAHM is used throughout East Asia as a medicine with common materials, and the academic theory underlying its application is also shared. The imbalance in trial regions is therefore considered to reflect only differences in the medical research environment of each country, and it is expected that this difference can be overcome by continuing to conduct studies on the usefulness of EAHM such as this review.
Unless a study restricts its scope of analysis to standardized natural products manufactured by pharmaceutical companies, a separate analysis of EAHM prescriptions and their herbal constituents, using various methods, will likely be essential in future systematic reviews of EAHM. Second, the results on commonly prescribed herbs derived from this review merely narrowed the range of hypotheses about core materials through descriptive statistics and unsupervised learning techniques. Therefore, whether the identified core herbs by themselves exert a better effect on psoriasis, and whether actual synergy arises from the observed close correlations, should be verified in separate follow-up research. Based on the hypotheses presented in this study, useful candidates could be further narrowed by comparing the effects of individual EAHMs through network meta-analysis or by predicting mechanisms with network pharmacology techniques combined with laboratory research. Third, PASI 60 and PASI 70, the endpoints adopted by the largest number of included studies, were selected as relatively validated primary outcomes. However, considering that PASI 75 or PASI 90 has been the international standard evaluation instrument in recent years, bias in the results of this study cannot be completely ruled out. Therefore, to evaluate the efficacy of EAHM more objectively against placebo as well as active controls, studies using widely adopted standard endpoints should be conducted. Fourth, most of the clinical trials included in this review lack preregistered protocols, do not adopt double-blind methodologies, and do not describe detailed randomization procedures. This means that many of the studies cannot dispel qualitative concerns, which also affects the reliability of the results. Although the quantitative growth of EAHM-related evidence over the past decade has been remarkable, more clinical trials are still needed to ensure qualitative progress. Finally, a limitation is that all trials included in this study were conducted in China. No language restriction was applied during the literature search, and both East Asian and English-language databases were searched, but only studies conducted in China met the inclusion criteria. However, as mentioned above, EAHM is used throughout East Asia with largely common materials, and the academic theory underlying its application is also shared. The regional imbalance in trial execution is therefore considered to reflect differences in the medical research environment of each country, and it is expected that this imbalance can be overcome by continuing to conduct studies, such as this review, on the usefulness of EAHM. This systematic review supports oral EAHM monotherapy as a potentially useful treatment for inflammatory skin lesions in psoriasis. Meta-analysis showed that EAHM had superior effects compared with the control group on PASI 70, PASI 60, continuous PASI score, IL-17, TNF-α, and DLQI in patients with psoriasis. In addition, EAHM decreased the incidence rate of adverse events compared with the CM control group. In other words, EAHM may contribute positively to skin symptoms, inflammatory status, quality of life, and drug adherence in patients with psoriasis. 
Further analysis of the EAHM prescriptions identified 16 high-frequency key materials: Rehmanniae Radix Recens; Salviae Miltiorrhizae Radix; Glycyrrhizae Radix et Rhizoma; Moutan Radicis Cortex; Lithospermi Radix; Smilacis Rhizoma; Radix Paeoniae Rubra; Dictamni Radicis Cortex; Imperatae Rhizoma; Hedyotidis Herba; Isatidis Radix; Lonicerae Flos; Sophorae Flos; Scutellariae Radix; Forsythiae Fructus; and Spatholobi Caulis. These materials are generally thought to show multipath anti-inflammatory activity based on a “heat clearing” action, and they are closely interconnected. Therefore, in future drug discovery on this topic, maximizing the anti-inflammatory synergy achievable by combining EAHM materials of the “heat clearing” category can be treated as a useful research hypothesis. Despite these results, concerns about the quality of the included studies and various biases were detected. To reach a firmer conclusion, additional clinical trials with a multicenter design, double blinding, and more valid outcome measures will need to be conducted.
Engineering organs, hopes and hybridity: considerations on the social potentialities of xenotransplantation
a2188cf5-9dea-4081-bf17-49131659b1db
11877069
Surgical Procedures, Operative[mh]
The world has witnessed several efforts in xenotransplantation involving the implantation of genetically engineered organs derived from pigs into human patients. Specifically, these include two heart and two kidney surgeries in the USA and a liver transplant in China. The heart transplantation patients survived 6 and 8 weeks, respectively; the kidney patients survived 2 months. For the liver transplantation patient from China, there is no news available. Xenotransplantation has also provided early examples as a subject of policymaking. It marked the first instance in which the precautionary principle was advised in the realm of bioethics . Additionally, xenotransplantation has prompted public consultation projects in several countries as well as a European Union-wide research project on participatory technology assessment . In social research, xenotransplantation serves as an instructive example to illustrate forms of governance and control such as ‘pre-emptive’ biopolitics and various social relations (including those associated with patienthood, animal ethics, biocapital and expertise). Xenotransplantation is not a new science; documented experiments date back to at least the 1600s, with sporadic attempts intensifying in the USA especially during the mid-to-late 20th century, involving non-human primates as source animals and human patients. Most recently, novel scientific insights have emerged with the development of the gene editing system CRISPR-Cas (Clustered Regularly Interspaced Short Palindromic Repeats), a tool that scientists working in the field of xenotransplantation praise for its technoscientific capacities to overcome previous hurdles like porcine endogenous retroviruses alongside a host of other transspecies immunological challenges. In addition to the recent pig-to-human transplantations, experiments with genetically engineered pig organs have been conducted on brain-dead individuals in the USA and in China, attempting to supplement the preclinical non-human primate model . Yet despite these developments and changes in the scientific and clinical landscape of xenotransplantation, many social and ethical aspects persist. Considering these developments, we deemed it timely to convene social science and humanities scholars—working in sociology, ethnology, sociocultural and medical anthropology, American studies and science, technology and society studies—dedicated to researching various angles, fields and dimensions of xenotransplantation. Some of us have been working in this field since the 1990s. The conference ‘XenoSocial: Examining the Social Implications of Xenotransplantation’, held in Tutzing, Germany, from 30 November 2023 to 2 December 2023, brought us together to examine the issues of xenotransplantation subjectivities and human–animal relations, public perceptions of xenotransplantation and science-public interaction as well as regulation and governance. It is not feasible to detail all the results and insights on the social implications of xenotransplantation here. Instead, we would like to highlight the issues that were discussed at the conference before delving into the social issues and questions we believe are necessary for future research endeavours. 
The biomedical and wider ‘sociotechnical imaginary’ —the collectively held visions of (supposedly) desirable futures shaped by the relationship across social, technological, and policy domains—regarding its targets and respective solutions has changed over the last few decades, giving rise to new research strategies and immunitary paradigms of which the production of human-animal hybridity emerges as a potentially viable and acceptable technology in saving human lives and alleviating the shortage of human donor organs. Creating interspecies chimaeras by means of inducing human pluripotent stem cells into pig blastocysts or embryos that are eventually able to render individually tailored organs for transplant patients is one of those ideas pursued in experimental research . We observe a development that increasingly operates under the distinct logic of pre-emption which appears to accompany xenotransplantation as an entrepreneurial project. This logic involves pre-emptive medical visions of transplanting organs before they show signs of failure . Importantly, the increasingly commercially driven nature of the xenotransplantation project shows signs of a biopolitics of pre-emption —a form of governance that aims to anticipate the future by changing it (through the use of biotechnologies)—guided by a vision directed toward a horizon fraught with potential health or security crises. In this respect, there are some companies that primarily invest in potential future revenues on biocapital—venture biocapital, so to speak—thereby intensifying the pressure to produce xenotransplantation results with less or little regard for non-economic factors (for example, social acceptability, animal ethics or sustainability). On the regulatory side of these processes, we observe the emergence of new forms of regulations such as hybrid regulatory agencies—or ‘institutional hybrids’ —taking shape, which concentrate regulatory powers and reduce diversity in the scientific performances and potentialities of xenotransplantation . We observe a strong inclination within the xenotransplantation enterprise actively to seek public acceptance as a means to socially legitimise its research and medical interventions. The interface and realm of interactions between science and the public take the form of particular ‘ethno-epistemic assemblages’ , increasingly composed of companies advocating for the agenda of xenotransplantation. Contrary to past expectations based on the deficit model of public understanding of science where public approval of xenotransplantation was presumed once citizens were informed, we have seen varied outcomes from public consultation efforts. In Canada , Australia and the Netherlands , moratoriums were endorsed. In New Zealand , Switzerland and Germany , the outcomes were in favour of the continuation of xenotransplantation research. However, in Switzerland and the Netherlands, the results of the participation formats were not considered by lawmakers even though the process was initiated by authorities , which undermines citizen participation. While the opinions of various ‘publics-in-particular’ in these public consultations are valuable, they cannot be regarded as reflective of public acceptance across the entire population, nor should they be dismissed. In terms of the non-human animals involved, the ethical focus is often restricted to comparing the relatively small number of pigs used for research with the much larger numbers of animals farmed for meat production. 
While using animals as a source of ‘spare parts’ for humans may align with an anthropocentric worldview, it is essential to insist that the frequently made assumption—that using animals to save human lives is acceptable when people also consume them for pleasure—is not uncontested . From a biocentric perspective, which underlies animal rights activism, dietary choices such as vegetarianism and veganism, both of which are gaining prominence, particularly in countries where xenotransplantation research is conducted, may influence public perception and acceptance. It is crucial to note that using animals in xenotransplantation does not equate the life of one pig to the life of one human . In the production process of genetically engineered pigs, less than 1% mature into animals , and on the path to clinical implementation in humans, a range of non-human primates, who have been used as proxies, will have perished during preclinical trials. In addition, for porcine islet cell xenotransplants, 10 adult or more than 90 juvenile pigs would be required for one person . Clearly, the number of animals needed for experimental and clinical xenotransplantation is significant. The acceptability of animal use for human purposes constitutes one aspect of the ‘xenotransplantation paradox’ which involves the simultaneous emphasis on sameness and difference between humans and animals in xenotransplantation discourse . The emphasis on similarity across species, as opposed to differences, serves to underscore the feasibility of xenotransplantation. From this framing, physiological differences are not seen as insurmountable barriers. For example, the heart is held to be merely a pump that can function in biological organisms regardless of their species-bound specificity. With the latest focus on gene editing, the legitimising discourses surrounding xenotransplantation have shifted towards a rhetoric of molecularisation , constructing the argument that it is not only on a physiological level of whole-organ function that pig-to-human xenotransplantation is feasible but also on a molecular one. In this sense, arguments hinge on the notion that scientists can use gene editing technologies to produce porcine organs that exhibit human proteins, which, when transplanted into human bodies, are more compatible with the human immune system on a molecular level due to their transgenic composition and thus improve potential xenotransplantation success. As social science researchers, we acknowledge our role in shaping the social reality of xenotransplantation. With distinct perceptions, interests and normative expectations, we consider it crucial for future developments in xenotransplantation to be accompanied by social research that comprehensively addresses its conditions, implications and consequences. For the probable event of further xenotransplantation in the future, whether through additional ‘compassionate uses’ or initial clinical trials, it becomes imperative to prioritise the diversity of patient perspectives within the xenotransplantation scenario. This can be achieved particularly through interviews with xenotransplantation recipients as well as with patients who qualify as potential recipients through their diagnosis (such as type 1 diabetes, Parkinson’s disease, cancer) and as (future) research subjects. On what we know Surveys seem to be a wanting instrument for examining a complex topic like xenotransplantation, as knowledge about it cannot be assumed . 
It is also difficult to assess the acceptance of xenotransplantation among people who are not in need of one themselves, especially as xenotransplantation tends to evoke strong affective reactions. Consequently, there has been quite a range of differing acceptance rates regarding xenotransplantation, both in public, and patient attitudes studies, with the survey outcomes depending—among other factors—on the information provided or the wording used in the survey , for example, when juxtaposed with risk perception . Noteworthy in this regard is that options such as human or artificial organs are preferred among patients and other population groups . This preference has been interpreted as strong affective reactions . However, qualitative empirical research studies also indicate that xenotransplantation finds acceptance among patients, when no alternative options or treatments exist . Accordingly, health matters tend to take priority over ethical considerations, even regarding the use of animals , with the functionality of the transplant important to these patients regardless of the source of the organ . The actual risk involved is seen in not receiving a transplant . This is also reflected in paediatric xenotransplantation where parents of children with congenital heart disease are willing to consider xenotransplantation as a last resort, but are particularly concerned about the stigmatisation of and among children. Nevertheless, social pressure and religious prohibitions are reported to influence patients’ willingness to accept xenotransplants . Research with actual xenotransplantation recipients is currently minimal but shows that survival and, among adolescents, autonomy take priority over other concerns. In this regard, various justifications for using animals are constructed , leading to entanglements of identity issues and normalisation . All these patient groups vary in their diagnosis and do not offer a uniform perspective, hence our calling for future social science research. Additionally, insights from studies on allotransplantation need consideration, providing potential areas of concern for patients . This includes concerns about stigmatisation (as also witnessed with xenotransplantation , discomfort with being transplanted, worries regarding one’s subjectivity, identity and embodiment, such as changes in personality or character, considerations of bodily integrity and the symbolic meaning of organs and unrecognised physical, emotional and existential suffering . Cultural perspectives and the deeply ingrained symbolic meanings associated with organs persist despite scientific explanations. Thinking within social classifications and normative categories is culturally embedded and not easily dismissed when contrasting it with physiological identities (eg, perceiving the heart solely as a pump regardless of species differences) . Specifically, fleshy organic parts are linked to the identity of their original bearers and the ‘alteration of what you are (in the material bodily sense) does affect who you are (subjectivity) in the case of organ transplantation’ ( , 161). These concerns are likely to arise, if not more so, in the context of xenotransplantation. We also expect consequences and implications of xenotransplantation in terms of social inequality. On the one hand, we consider the impact of socially unequal treatments in the current form of allotransplantation on minority groups. Throughout the history of allotransplantation, race has played a complicated role. 
It has both factored into equations of ‘deserving’ recipients and has troubled those who understand racial identification along the lines of in/authenticity and im/purity. The transplantation of organs across differentially raced bodies has elicited questions about the nature of race itself, from both biological and cultural perspectives . In the case of xenotransplantation, issues concerning race have the potential to expand in complicated ways. While some researchers hail xenotransplants as potentially superior to allotransplants , such an understanding fails to take into account social and cultural aversions towards violating the animal–human boundary. It is therefore not clear whether, if given a choice, patients would ever voluntarily opt for a non-human over a human organ. For minoritised groups who have historically been animalised through scientific rhetoric, the stakes of crossing this boundary are even higher . While the perspectives of minority groups on xenotransplantation have been examined occasionally (eg, assessed acceptance across different racial groups in the USA), the views of minorities across various social categories and in different countries require more systematic research. Moreover, we have also observed challenges in the patient selection process for ‘compassionate use’ (or ‘expanded access’) candidates for xenotransplantation, particularly in the USA, where requirements for psychosocial support and compliance are biased in terms of class, race and dis/ability. This leaves those patients in a particularly vulnerable position with no alternatives once experimental xenotransplantation is offered . On the other hand, the debate on global justice , especially concerning the unequal distribution of risks, has diminished with some even now endorsing xenotransplantation development in selected countries . Nevertheless, the unequal distribution of risks is still possible through the outsourcing of xenotransplantation experiments and trials to parts of the world where regulatory requirements may be less stringent or where the potential economic benefits of offering niche treatments to medical (xeno)tourists may be highly appealing . On what lies ahead That being said, we recognise that xenotransplantation may represent an opportunity for many patients to alleviate themselves from feelings of guilt towards donors and their relatives, as well as from the realisation that another human has given their life for them. Xenotransplantation raises expectations , including the hope to alleviate the organ shortage and the hope among chronically ill and dying patients for a prolonged life. This may lead them to opt for experimental treatments (a facet of this involves the necessary infringement on what is considered a patient’s autonomy . One of the challenges that patients and their relatives may face can be seen in the dual construction of the xenotransplanted body (viewed as sacred and profane, subjective and depersonalised, both similar and different from the ‘donor’ species; organs as essential and disposable, simultaneously vitalistic and mechanical). These complexities may pose a potential burden for those affected (potential and actual recipients) necessitating careful consideration and grappling with these intricate aspects. Many factors remain speculative, such as the potential impact of organ transplant commodification on society and the symbolic value of the organ as a gift. 
Companies producing genetically engineered pigs for xenotransplantation are already implementing different strategies for their ‘products’. Revivicor, which provided the porcine hearts for the first cardiac xenotransplants, and eGenesis, which respectively provided the first kidney, each implemented 10 or more genetic modifications to the source pigs. Meanwhile, scientists in Germany advocate that fewer genetic edits to the source pigs are a safer and more reproducible approach . These competing approaches have the potential to develop into national market strategies as companies work to develop different models of genetically modified pigs as new biocommodities. This could lead to challenges in decision-making options for xenotransplantation patients, and is likely to pose new regulatory challenges . Additionally, there is uncertainty regarding how the provision of xenotransplants will affect allotransplantation: Will it result in the anticipated surplus of organs or will it lead to a decrease in allotransplants as potential donors perceive a diminished need? This outcome hinges on whether xenotransplants prove to be on par, inferior or superior to allografts. However, the outcome also rests on whether potential allotransplant donors (cadaveric and living) will see human organ donation as necessary given the availability of animal sources. In addition, bereaved families who endure significant pain when agreeing to the organ transplantation of a deceased family member may decide against it if an alternative becomes available. This would inevitably reduce the availability of organs rather than increase them even if xenotransplantation is viable. Furthermore, it is uncertain whether hierarchies of patient accessibility to allo- and xenografts will arise (eg, as shaped by socioeconomic disparities or insured status). Operating at the extremes of the possible, xenotransplantation—besides creating human–animal hybrids that evoke various transhuman and posthuman imaginaries—also introduces a new type of social figure: The brain-dead individual who is sustained for the sole purpose of experimental xenotransplant trials. This individual is not merely an ‘organ donor’ but rather a body or corpse ‘donor’, representing a socio-technological-legal construct used as an alternative to non-human primates as an experimental research model. The consequences of this trajectory require careful consideration and further research, including various stakeholders and affected groups involved . Lastly, we anticipate ethical questions related to the role and use of animals, which some may perceive to be resolved to resurface as a matter of public opinion due to the current zeitgeist. Considering these concerns and recognising that social research often unveils facets and implications that might otherwise remain concealed, we deem it essential for the sake of facilitating better-informed and responsible social and technological developments to accompany these dynamics. 
This entails conducting research on various aspects, including the patient (and relatives’) perspective (consent, autonomy, selection process, identity and subjectivities), the construction of the animals involved, the forms of regulation, biopolitics and bioeconomy (including the competition between various differently engineered pigs/products) in play on a national and global scale, the knowledge constructed through xenotransplantation, public opinion and the various publics produced through xenotransplantation, the opinions of researchers, clinicians and other stakeholders and the interplay between allotransplantation and xenotransplantation, along with their dynamics and management among other relevant factors. By emphasising the importance of the social issues and challenges discussed in this article, we hope to raise awareness among medical staff, researchers and policymakers involved in xenotransplantation regarding these matters.
Quality of Digital Health Interventions Across Different Health Care Domains: Secondary Data Analysis Study
07c35196-ebff-47a9-958e-d79ba096907c
10704310
Ophthalmology[mh]
According to a report from 2021 , there were more than 350,000 digital health interventions (DHIs) available in the app stores. And in 2020, more than 91,000 DHIs had been added to app stores, which amounts to 251 DHIs per day (on average). Moreover, searches for DHIs within app stores have also increased . A potential catalyst for this could have been the COVID-19 pandemic and restricted access to incumbent services. Nevertheless, these findings clearly indicate that the public has a great interest in the use of DHIs. However, some of these DHIs may contain harmful content. For example, a study from 2016 conducted a systematic assessment of suicide prevention and deliberate self-harm mobile apps. The study found that some of the apps encouraged risky behaviors, such as the uptake of drugs. Similarly, reviews across different health care domains have demonstrated that many DHIs raise safety, security, or data privacy (DP) concerns , include incomplete or misleading medical information , or have not been supported by sufficient scientific evidence . This indicates that the assessment of DHIs for adherence to best practice standards is critical to ensuring user safety and DHI’s effectiveness, as well as allowing health care professionals to confidently recommend DHIs to clients or patients. Previous systematic reviews have shown that there are numerous existing assessments for DHI evaluation by experts and users that encompass a large number of heterogenous assessment criteria . However, many assessment frameworks demonstrate shortcomings, such as being limited to DHIs in a particular health care domain , not including important assessment areas such as DP , or being focused on health care professionals without providing meaningful insights to end users . Assessment that addresses these issues, including disease-independent criteria across key areas and making assessment results easily accessible to end users, is needed. The Organisation for the Review of Care and Health Apps (ORCHA) is a UK-based digital health compliance company that specializes in the assessment of DHI quality in terms of compliance with best practice standards. The Organisation for the Review of Care and Health Apps Baseline Review (OBR) provided by ORCHA is a proxy for DHI’s compliance with best practice standards. ORCHA is currently working with 70% of National Health Service (NHS) organizations within England and provides DHI libraries, hosted by various health care organizations, that contain information about DHIs that have been assessed with the OBR. Specifically, the OBR results provide information (including an assessment score between 0 and 100) regarding DHIs’ compliance with best practices in the domains of professional and clinical assurance (PCA), DP, and user experience (UX), allowing end users and clinical professionals to make informed decisions on whether to use or recommend these DHIs. Notably, the OBR has been applied to thousands of DHIs, which provides a valuable data set for the investigation of best practice compliance across different types of DHIs. For instance, compliance may vary between DHIs in different health care domains and with different levels of risk (eg, as per the National Institute for Health and Care Excellence [NICE] Evidence Standard Framework [ESF] tier classification ). 
Gaining insights into such variations is important to determine what factors may drive high or low compliance with best practices and which health care domains require more effort and investment to improve the quality of DHIs. Moreover, an understanding of how compliance varies among different types of DHIs can serve as a future reference point for determining how particular DHIs compare to other similar DHIs. This study aimed to explore these questions using a data set comprising OBR assessment results for 1574 DHIs. In this study, we explore OBR scores regarding 3 NICE tiers and compare the quality of DHIs across 26 different health care domains. We do this by establishing quantiles for each tier and for each health care domain. We want to determine if OBR scores differ across DHIs in different health care domains. This will allow us to identify health care domains that may require more investment to support their development. We hypothesize that the quality of DHIs differs across several health care domains. The Data Set and Assessment For this study, ORCHA provided a data set comprising raw data from 1574 DHIs, which were assessed using the OBR version 6 tool . The OBR version 6 is the latest version of the “ORCHA assessment tool,” which consists of almost 300 objective (mostly dichotomous “yes” or “no”) assessment questions in 3 areas: PCA, DP, and UX. Each of the areas is scored individually on a scale from 0 to 100, and the area scores are combined into an overall ORCHA score. NICE tiers classify DHIs based on their functionality, risk, and regulatory status. Tier A indicates that the DHIs provide health and social care services with no measurable user outcome. Tier B denotes that the DHIs can provide 2-way communication between users and health care professionals and provide health care information or a health diary. Tier C indicates that DHIs provide preventative behavioral change interventions aimed at health issues; they may allow users to self-manage a specific condition, provide or guide treatment for a condition, record and transmit data about this condition to a professional, caregiver, or third party without a user’s input, contain a calculator that impacts treatment, provide diagnostics for a specific condition, or guide a diagnosis . Since the NICE tiers depend upon functionality, risk, and regulatory status, it would be inappropriate, for example, to score a tier C DHI using the OBR tier B scoring. An ORCHA threshold score of 65 is an NHS-accepted cutoff point that indicates compliance with best practice standards for DHIs, meaning that the DHI may be used or recommended by NHS staff. The score of 65 was established with NHS partners in 2020 and has since remained there. It represents the point at which, in most cases, excess risks are avoided; that is, a DHI cannot score above 65 while having no privacy policy, having no relevant evidence, or being an uncertified medical device. A score of 65 is also the initial score for every DHI in each assessment area (UX, PCA, and DP) and for the overall ORCHA score. Based on the answers to the assessment questions, this initial score is then adjusted through value and risk points (value points increase the score and risk points reduce it), and the adjusted area scores are combined to give the overall ORCHA score. 
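As a purely illustrative sketch of the value/risk-point mechanism just described, the following Python snippet shows how an area score could start at a baseline of 65 and be adjusted up or down. The question IDs, point values, and clamping rule are assumptions for demonstration; they are not ORCHA's actual OBR questions or weights.

```python
# Hypothetical illustration only: a baseline-65 area score adjusted by value/risk points.
# Question IDs, point values, and the 0-100 clamping are assumptions, not OBR rules.
BASELINE = 65

def area_score(answers, rules):
    """answers: {question_id: bool}; rules: {question_id: (value_points, risk_points)}."""
    score = BASELINE
    for qid, answered_yes in answers.items():
        value_pts, risk_pts = rules[qid]
        # value points raise the score for good practice; risk points lower it otherwise
        score += value_pts if answered_yes else -risk_pts
    return max(0, min(100, score))

# Toy data privacy (DP) questions for an imagined tier B DHI
dp_rules = {
    "privacy_policy_shown_on_first_use": (10, 15),
    "data_encrypted_in_transit": (8, 12),
}
dp_answers = {
    "privacy_policy_shown_on_first_use": True,
    "data_encrypted_in_transit": False,
}

print(area_score(dp_answers, dp_rules))  # 65 + 10 - 12 = 63
```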
For example, for apps that store personal or sensitive information, value points are assigned if the privacy policy is made immediately available when the user first uses the DHI, and risk points are assigned if a privacy policy is not clearly available when using the DHI. The number of value and risk points assigned per question varies based on the NICE ESF tier that has been assigned to a DHI. If no value or risk points were assigned during the assessment, then the ORCHA score remains 65 . Furthermore, to receive full points for appropriate evidence for its ESF tier, a tier B DHI (depending on its exact functionality) may only require a user benefits statement (eg, based on pilot results) and validation of the provided information by experts or references, while a tier C DHI will likely require a full-scale observational study or randomized controlled trial to meet the same evidence threshold. These differences in evidence requirements were introduced by the NICE ESF and adopted with slight amendments by the ORCHA assessment to ensure that standards are realistic and achievable for DHI companies without placing an undue burden on developers of low-risk DHIs, while at the same time setting expectations sufficiently high (especially for high-risk DHIs) to ensure safety and effectiveness and to provide users and health care providers with confidence in the DHIs. Some questions in the ORCHA assessment tool do not assign value or risk points but provide information or context only; for example, the question “When was the last Care Quality Commission inspection completed?” does not affect the score. Each of the 1574 apps was assessed by at least 2 trained ORCHA reviewers as part of “business as usual” for ORCHA; in the case of a dispute, a third ORCHA reviewer resolved it through discussion. All ORCHA reviewers have undergone the same training to use the OBR version 6 assessment tool. It takes around 6 months for an ORCHA reviewer to be trained on the OBR and considered ready to carry out live reviews with the tool. The training involves teaching the new reviewer about each area (UX, PCA, and DP) of the OBR and is carried out either in person or through web-based meetings. The data set used included DHI assessments that were published between January 18, 2021, and January 6, 2022. All DHIs were assigned to 26 different health care domains and to 1 of the 3 NICE tiers, established by the NICE ESF . Statistical Analysis We carried out secondary data analyses of an ORCHA data set, which comprised the assessment of 1574 DHIs. The data analysis was carried out using R Studio (The R Foundation) and the R programming language (R Core Team). Descriptive statistics, including the minimum score, first quantile, median, mean (SD), third quantile, maximum score, and SE of the mean, were calculated for each of the OBR scores (ORCHA, PCA, DP, and UX). Box plots were generated to study each score per NICE tier. DHIs were also grouped and analyzed across the different health care domains, with the sample size (number of DHIs) for each health care domain presented. Each OBR score (ORCHA, PCA, DP, and UX) per health care domain is presented in quantiles from 0% to 100% in increments of 25%. Quantiles have been used so that an easy comparison could be made between different scores, NICE ESF tiers, and health care domains. 
Normality testing (the Shapiro-Wilk test ) was used to determine which hypothesis test was appropriate. The Kruskal-Wallis rank sum test was used to compare the scores between the different NICE tiers, with P <.05 considered statistically significant. The Kruskal-Wallis rank sum test was also used to compare the scores across the health care domains, with post hoc analysis using the Dunn test and Holm’s method for P value adjustment for multiple pairwise comparisons. A 2-sided unpaired Wilcoxon rank sum test was used to determine whether DHIs with International Organization for Standardization (ISO) 27001 certification differ statistically from those without, regarding DP scores. The Wilcoxon rank sum test was also used to determine whether DHIs classified as medical devices are statistically significantly different from those that are not medical devices, regarding PCA scores. After the Wilcoxon rank sum test, the Cliff delta was used to indicate the magnitude of the difference between the 2 compared samples of DHIs, with a 95% CI. Cliff delta magnitude was assessed using the thresholds |d|<.147 “negligible,” |d|<.33 “small,” |d|<.474 “medium,” otherwise “large” . The above analyses were conducted for all DHIs and separately by NICE tier (n=number of DHIs): tier B (n=1155) and tier C (n=408). Tier A (n=11) was excluded due to its small sample size. Ethical Considerations This secondary data analysis study gained ethical approval (project number: CEBE_RE-22-002) from Ulster University (ethics filter committee, Faculty of Computing, Engineering, and the Built Environment). The process undertaken by ORCHA ensures that DHI developers are aware of their score and are given time to contest the findings of the assessment, which may be amended if developers provide additional relevant information. All reviews, unless explicitly asked to be removed by the developer, are covered as suitable for research in ORCHA’s privacy policy . 
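Before turning to the results, the following is a hedged sketch of the kinds of rank-based comparisons described in the statistical analysis above. The study itself used R; this stand-in uses Python's SciPy, the group labels and scores are hypothetical, and the Dunn post hoc step with Holm adjustment is omitted because it would need an additional package.

```python
# Hedged sketch of the rank-based comparisons described above, using SciPy rather than
# the R workflow actually used in the study; group names and scores are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
domain_scores = {  # placeholder ORCHA scores per health care domain
    "respiratory": rng.normal(72, 10, 40),
    "mental_health": rng.normal(60, 12, 60),
    "ophthalmology": rng.normal(50, 11, 30),
}

# Shapiro-Wilk normality check (motivates the use of rank-based tests); index 1 is the p value
print({k: stats.shapiro(v)[1] for k, v in domain_scores.items()})

# Kruskal-Wallis rank sum test across the domains
print(stats.kruskal(*domain_scores.values()))

# Two-sided unpaired Wilcoxon rank sum (Mann-Whitney U) test for two groups,
# plus Cliff's delta computed directly from its definition
def cliffs_delta(x, y):
    x, y = np.asarray(x), np.asarray(y)
    greater = sum((xi > y).sum() for xi in x)
    less = sum((xi < y).sum() for xi in x)
    return (greater - less) / (len(x) * len(y))

group_a, group_b = domain_scores["respiratory"], domain_scores["ophthalmology"]
print(stats.mannwhitneyu(group_a, group_b, alternative="two-sided"))
print(cliffs_delta(group_a, group_b))
```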
Overview presents a summary of the OBR scores for all DHIs. A Kruskal-Wallis test revealed that the distributions of the UX, PCA, and DP scores were statistically significantly different from each other ( P <.001). A shows that UX scores have the least variance of the 3 assessment areas, whereas PCA scores have the greatest variance. shows that the SD for UX scores is 8.20, whereas the SD for PCA scores is 24.8, which is approximately 3 times greater than the SD of the UX scores. The UX scores are also typically higher than the other scores. A total of 57.3% (902/1574) of DHIs in the data set have an ORCHA score below the accepted ORCHA threshold of 65. contains the number of DHIs with varied ORCHA thresholds. contains the steps involved in selecting 1574 DHIs. 
shows the quantiles for each of the scores from 0% to 100% in increments of 25%. The ORCHA score for the 50% (median) quantile is 61.5, which is below ORCHA’s threshold score of 65, meaning that most of the DHIs in the data set fail to adhere to the NHS cutoff for compliance with best practice standards. The medians (IQRs) for the OBR scores are ORCHA (61.5, IQR 51.0-73.0), UX (75.2, IQR 70.0-79.6), PCA (49.6, IQR 31.9-76.1), and DP (65.1, IQR 55.0-73.4). Scores per NICE Tier DHIs were distributed as follows across the different NICE tiers: tier A (11/1574, 0.699%), tier B (1155/1574, 73.4%), and tier C (408/1574, 25.9%). depicts box plots for the OBR scores within each tier. Further information is provided in , which depicts quantiles for each score and NICE tier permutations from 0% to 100% in increments of 25%. Scores by Health Care Domains The highest number of DHIs fell into the health care domains of healthy living (n=548) and mental health (n=436). provides a table of health care domains, including the number of DHIs within each health care domain (ie, the sample size) and the scores’ quantiles from 0% to 100% in increments of 25%. - show the distribution of scores within each health care domain as box plots in descending order of the median (except for the first box plot, which shows overall performance). Further details regarding OBR scores (ORCHA, UX, PCA, and DP) for each health care domain can be found in and . Kruskal-Wallis rank sum tests were used to check for statistically significant differences between DHI categories. A statistically significant result ( P <.001) was obtained for all OBR scores (ORCHA, UX, PCA, and DP), meaning that for each score at least 1 health care domain distribution differs statistically significantly from another. A post hoc analysis was conducted using the Dunn test to identify which categories are statistically different from each other . For all DHIs, a total of 46.2% (12/26) of health care domains had a median ORCHA score of 65 or more. These health care domains, presented in descending order of quality (median ORCHA score; n), are as follows: respiratory (median 74.0; n=77), urology (median 74.0; n=15), first aid (median 70.5; n=14), gastrointestinal (median 69.0; n=24), cardiology (median 68.5; n=34), children’s health (median 68.0; n=71), cancer (median 68.0; n=54), social support network (median 67.0; n=17), musculoskeletal disorders (median 67.0; n=53), neurodiverse (median 66.3; n=52), pregnancy (median 66.0; n=82), and neurological (median 65.0; n=136). A total of 53.8% (14/26) of health care domains had a median ORCHA score of less than 65. These, in descending order, are as follows: utilities or administration (median 64.5; n=55); diabetes (median 63.5; n=81); dermatology (median 63.0; n=29); pain management (median 62.8; n=44); medicines and clinical reference (median 62.0; n=148); healthy living (median 61.0; n=548); older adult (median 61.0; n=13); ear, nose, throat, and mouth (median 60.5; n=23); mental health (median 60.0; n=436); dental care (median 59.0; n=25); women’s health (median 57.0; n=67); sexual health (median 57.0; n=58); allergy (median 55.8; n=14); and ophthalmology (median 48.0; n=56). For tier B, a total of 57.7% (15/26) of health care domains had a median ORCHA score of 65 or more. 
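For readers who want to reproduce the per-domain summaries, the following sketch computes the 0%-100% quantiles in 25% increments and the share of DHIs at or above the threshold of 65 for each health care domain; `dhi`, `orcha` and `domain` are the same hypothetical names used in the earlier sketch, and the domain label "respiratory" is only an example.

```r
# Quantile summary (0%, 25%, 50%, 75%, 100%) of the ORCHA score per health care domain
quantiles_by_domain <- tapply(dhi$orcha, dhi$domain,
                              quantile, probs = seq(0, 1, by = 0.25))
quantiles_by_domain[["respiratory"]]   # quantiles for a single (example) domain

# Share of DHIs in each domain meeting the ORCHA threshold of 65
share_at_threshold <- tapply(dhi$orcha, dhi$domain, function(x) mean(x >= 65))
sort(share_at_threshold, decreasing = TRUE)
```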
These, in descending order, are as follows: cancer (median 75.0; n=37), respiratory (median 73.0; n=49), urology (median 71.5; n=12), pregnancy (median 70.8; n=56), first aid (median 70.5; n=14), utilities or administration (median 70.0; n=41), children’s health (median 68.0; n=66), social support network (median 68.0; n=16), neurological (median 68.0; n=104), medicines and clinical reference (median 68.0; n=95), neurodiverse (median 67.5; n=48), diabetes (median 67.5; n=33), musculoskeletal disorders (median 67.0; n=30), older adult (median 66.5; n=10), and cardiology (median 66.0; n=16). A total of 42.3% (11/26) health care domains had a median ORCHA score of less than 65. These, in descending order, are as follows: dermatology (median 64.3; n=16); pain management (median 63.5; n=35); mental health (median 62.0; n=332); sexual health (median 62.0; n=27); healthy living (median 61.0; n=436); dental care (median 59.5; n=20); women’s health (median 58.3; n=36); allergy (median 58.0; n=9); gastrointestinal (median 56.0; n=13); ear, nose, throat, and mouth (median 55.0; n=17); and ophthalmology (median 50.0; n=30). For tier C, a total of 24% (6/25; no “first aid” health care domain) health care domains had a median ORCHA score of 65 or more. These, in descending order, are as follows: urology (median 79.0; n=3), respiratory (median 74.5; n=28), cardiology (median 72.5; n=18), gastrointestinal (median 71.0; n=11), children’s health (median 70.0; n=5), and musculoskeletal disorders (median 68.0; n=23). A total of 76% (19/25) health care domains had a median ORCHA score of less than 65. These, in descending order, are as follows: cancer (median 64.0; n=17); diabetes (median 63.0; n=48); ear, nose, throat, and mouth (median 61.8; n=6); healthy living (median 60.0; n=106); pain management (median 59.0; n=9); women’s health (median 57.0; n=31); neurological (median 57.0; n=30); utilities or administration (median 57.0; n=14); medicines and clinical reference (median 55.5; n=49); mental health (median 55.0; n =102); pregnancy (median 55.0; n=26); older adult (median 55.0; n=2); sexual health (median 51.0; n=31); dermatology (median 46.0; n=13); neurodiverse (median 46.0; n=3); dental care (median 44.0; n=5); ophthalmology (median 43.3; n=26); allergy (median 42.0; n=5); and social support network (median 34.0; n=1). contains UX, PCA, and DP assessment areas ranked in order, and contains rank consistency. contains Distribution of DHIs across NICE Evidence Standards Framework (ESF) tiers by healthcare domain. Partition of DHIs by ISO Certification and Medical Device Designation Using median (IQR), the following difference has been found in DP scores among DHIs that received ISO 27001 certification (79.4, IQR 73.6-85.3; n=77) and those that did not (65.0, IQR 54.1-72.4; n=1497), with a 2-sided unpaired Wilcoxon rank sum test with P <.001 and Cliff delta =.704 (95% CI 0.620-0.772). The following difference has been found in PCA scores among DHIs that have been designated as “medical device” (58.8, IQR 33.7-84.4; n=162) and those that were not (49.3, IQR 31.9-76.1; n=1412), with a 2-sided unpaired Wilcoxon rank sum test with P =.003 and Cliff delta =.143 (95% CI 0.040-0.243). For tier B, the following difference has been found in DP scores among DHIs that received ISO 27001 certification (78.5, IQR 71.4-81.8; n=42) and those that did not (65.0, IQR 53.8-72.2; n=1113), with a 2-sided unpaired Wilcoxon rank sum test with P <.001 and Cliff delta =.667 (95% CI 0.541-0.764). 
The following difference has been found in PCA scores among DHIs that have been designated as “medical device” (78.3, IQR 41.8-86.7; n=23) and those that were not (50.9, IQR 31.9-76.1; n=1132), with a 2-sided unpaired Wilcoxon rank sum test with P <.001 and Cliff delta =.644 (95% CI 0.470-0.769). For tier C, the following difference has been found in DP scores among DHIs that received ISO 27001 certification (83.2, IQR 75.7-86.4; n=35) and those that did not (66.8, IQR 54.1-73.6; n=373), with a 2-sided unpaired Wilcoxon rank sum test with P <.001 and Cliff delta =.724 (95% CI 0.604-0.812). The following difference has been found in PCA scores among DHIs that have been designated as “medical device” (43.7, IQR 28.7-80.8; n=139) and those that were not (41.1, IQR 30.3-68.2; n=269), with a 2-sided unpaired Wilcoxon rank sum test with P =.002 and Cliff delta =.183 (95% CI 0.061-0.300). 
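The two-group partition analyses reported above can be expressed compactly in R as below; the logical columns `iso27001` and `medical_device`, like the data frame `dhi` itself, are hypothetical stand-ins for however these flags are encoded in the ORCHA data set.

```r
library(effsize)  # cliff.delta()

# Median (IQR), 2-sided unpaired Wilcoxon rank sum test and Cliff delta for a binary split.
compare_partition <- function(score, flag) {
  a <- score[flag]   # eg, DHIs with ISO 27001 certification
  b <- score[!flag]  # DHIs without
  list(
    summary_flagged   = quantile(a, c(0.25, 0.50, 0.75)),
    summary_unflagged = quantile(b, c(0.25, 0.50, 0.75)),
    wilcoxon          = wilcox.test(a, b, alternative = "two.sided", paired = FALSE),
    effect_size       = cliff.delta(a, b, conf.level = 0.95)
  )
}

compare_partition(dhi$dp,  dhi$iso27001)        # DP scores by ISO 27001 status
compare_partition(dhi$pca, dhi$medical_device)  # PCA scores by medical device status

# The same comparison restricted to one tier, eg, tier B
tier_b <- dhi[dhi$tier == "B", ]
compare_partition(tier_b$pca, tier_b$medical_device)
```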
Principal Findings A total of 57.3% (902/1574) DHIs in the data set failed to meet the ORCHA (a proxy for overall quality) threshold score of 65. The UX score was consistently the highest out of the 3 assessment areas (UX, PCA, and DP). The UX score also had the least variance when compared with other OBR scores. We found that scores differed widely between different health care domains. However, only some differences achieved statistical significance (Dunn test in ). The analysis revealed that the highest ORCHA scores were observed in the respiratory health care domain and the lowest in the ophthalmology health care domain ( and ). There have been several studies that suggest DHIs’ quality could be further improved . 
By identifying health care domains that contain DHIs with low OBR scores, this study indicates where greater effort is needed to quality-assure these DHIs. shows that the largest variance was observed in the PCA assessment area (PCA score IQR 44.2 and SD 24.8), which includes criteria related to the availability of scientific evidence to support the content and efficacy or effectiveness of the DHIs. This variation in clinical assurance across different DHIs is consistent with previous research. For instance, a paper from 2021 found that evidence to support the claims made by health apps is often unavailable or of questionable quality. Similarly, a systematic review and exploratory meta-analysis from 2017 with a focus on diagnostic apps found that the evidence for the diagnostic performance of health apps is limited. Additionally, a meta-analysis of randomized controlled trials from 2021 concluded that, while there has been an increase in the rigorous evaluation of apps aimed at modifying behavior to promote health and manage disease, the evidence that such apps can improve health outcomes is weak. Previous work has been done on benchmarking DHI System Usability Scale (SUS) scores across digital health apps and for heart failure apps . This study differs in that it compares a broader selection of DHIs across health care domains and assessment areas (UX, PCA, and DP). Previous work from 2020 introduced an implementation framework called Technology Evaluation and Assessment Criteria for Health Apps. The aim of the framework is to enable users to make informed decisions regarding app use and to increase engagement with app evaluation by introducing a process to assist app implementation across all DHIs. This study differs in that it not only enables users to make informed decisions regarding app use but also enables the comparison of DHIs across health care domains, and it identifies which health care domains may need more attention regarding their quality. Compliance With Best Practices Across Health Care Domains This study further observed differences in best practice compliance among health care domains. While DP and UX median scores were relatively similar across health care domains, large differences were observed between PCA scores ( and and ). A potential partial explanation for these findings may be that the proportion of DHIs within different tiers, and thus with different levels of evidence requirements (see above), may vary among health care domains. This suggestion is partially supported by the data, as a large proportion of DHIs in health care domains with high PCA scores fall into tiers A or B rather than C ( and ). For all DHIs, 12 of the 26 health care domains had a median ORCHA score of 65 or more, and 14 of the 26 had a median ORCHA score of less than 65. For tier B, 15 of the 26 health care domains had a median ORCHA score of 65 or more, and 11 of the 26 had a median score of less than 65. For tier C, 6 of the 25 health care domains (no tier C DHIs fell into the “first aid” domain) had a median ORCHA score of 65 or more, and 19 of the 25 had a median score of less than 65. Respiratory and urology DHIs were consistently highly ranked in NICE tiers B and C ( , and ). 
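One simple way to examine this suggested explanation is to tabulate the tier mix within each health care domain; the sketch below does so with the same hypothetical column names used earlier and assumes the tier labels are stored as "A", "B", and "C".

```r
# Counts of tier A/B/C DHIs per health care domain
tier_by_domain <- table(dhi$domain, dhi$tier)

# Row-wise proportion of tier C DHIs in each domain (assuming a "C" level exists)
prop_tier_c <- prop.table(tier_by_domain, margin = 1)[, "C"]

# Domains dominated by tier A/B DHIs, ie, with the smallest share of tier C
head(sort(prop_tier_c), 10)
```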
The data indicate that DHIs with ISO 27001 certification (median 79.4, IQR 73.6-85.3; n=77) score higher on DP than those without (median 65.0, IQR 54.1-72.4; n=1497). The difference was statistically significant (Wilcoxon rank sum test, P <.001), with Cliff delta =.704 indicating a large difference in DP scores. Similar results were obtained when the DHIs were partitioned into tiers B and C, as shown in the “Partition of DHIs by ISO Certification and Medical Device Designation” section. DHIs designated as medical devices (median 58.8, IQR 33.7-84.4; n=162) scored higher on PCA than those that were not (median 49.3, IQR 31.9-76.1; n=1412). The difference was statistically significant (Wilcoxon rank sum test, P =.003), but Cliff delta =.143 indicates a negligible difference in PCA scores. However, when the DHIs were partitioned by NICE tier, tier B DHIs designated as medical devices (median 78.3, IQR 41.8-86.7; n=23) scored higher than those that were not (median 50.9, IQR 31.9-76.1; n=1132), with a Wilcoxon rank sum test P <.001 and Cliff delta =.644 indicating a large difference in PCA scores. For tier C, DHIs designated as medical devices (median 43.7, IQR 28.7-80.8; n=139) and those that were not (median 41.1, IQR 30.3-68.2; n=269) differed with a Wilcoxon rank sum test P =.002 but a much lower Cliff delta of .183, indicating a negligible difference in PCA scores. Medical device DHIs therefore appear to outperform nonmedical device DHIs on PCA scores, especially within tier B. One may speculate that, because medical device DHIs are subject to regulatory requirements , more is expected of them with regard to PCA. This leads to low PCA scores among tier C apps, with a negligible difference in PCA score between medical device and nonmedical device DHIs according to the Cliff delta; however, because medical device DHIs are typically assigned to tier C, those that fall into tier B outperform nonmedical device DHIs as their developers attempt to meet regulatory demands. An alternative interpretation is that, since medical device regulation is a gold standard under which clinical evidence is evaluated, higher PCA scores would be expected for DHIs designated as medical devices than for nonmedical devices in tier C, as was seen in tier B. It may be that the PCA score for tier C is an inappropriate measure of clinical evidence, that is, that the criteria for tier C DHIs do not differentiate well between different levels of evidence. A study from 2020 focused on the value of mobile health (mHealth) for patients and found that the highest level of clinical evidence is scarce for mHealth apps used in clinical scenarios. The analysis presented in this study identifies health care domains where DHIs may require improvements regarding their quality and may therefore help mitigate the problem of scarce evidence regarding the quality of DHIs. The current findings indicate that OBR scores differ among DHIs in different NICE tiers and health care domains. 
After receiving OBR scores, a specific DHI can be compared with other DHIs in the same health care domain or NICE tier using quantiles. This will reveal how compliant the DHI is with best practice standards relative to similar DHIs. These comparisons can be conducted with ORCHA scores or for the separate assessment areas (UX, PCA, and DP; and ). Limitations A few limitations of this study should be noted. There were uneven sample sizes for DHIs across NICE tiers (the sample size ranged from 11 to 1155 DHIs) and health care domains (the sample size ranged from 13 to 548 DHIs). When the data are partitioned, smaller samples within a tier or category lead to less reliable results for that tier or category. Where the same DHI was assessed twice, that is, as an Android version and an iOS version (n=466 DHIs; ), the mean of the Android and iOS OBR scores was calculated and included in the analysis. However, it is possible that if the names of the DHIs were somewhat different for the Android and iOS versions, both would have been included in the analysis as separate DHIs. The OBR version 6 evolved from earlier versions of the OBR during the height of the COVID-19 pandemic. Originally, version 6 was created as a more stringent version of the OBR so that ORCHA could recommend the most compliant DHIs to members of the UK population with confidence. ORCHA tested version 6 on a selection of highly compliant DHIs (as determined by previous versions of the OBR). This set of 30 DHIs served as the pilot group, with the subsequent 2097 DHIs being assessed using ORCHA’s typical assessment approach of grouping DHIs into categories, ordering them by number of downloads, and assessing the most downloaded DHI in each health care domain, followed by the second, and so forth. Future Work Concurrent validity testing of the ORCHA assessment tool could be performed by comparing ORCHA scores against other assessment frameworks (eg, the Mobile Application Rating Scale ). The analysis conducted in this paper could also be repeated with more DHIs in tier A. Conclusion This study examined assessment data for 1574 DHIs and found that 57.3% (902/1574) of the DHIs in the data set failed to meet the ORCHA threshold score of 65 (accepted by the NHS as a signal of compliance with best practice standards). This work also identified differences with regard to the OBRs of DHIs in different tiers and health care domains. Appropriate evidence and clinical assurance were especially lacking in DHIs with high risk (as per their tiers), which raises safety concerns and highlights the need for DHI assessments that support users in the selection of safe and effective DHIs. Interestingly, more stringent (tier C) clinical assurance and evidence requirements seemed more likely to be met in health care domains with high funding availability, such as diabetes and cardiology. This underscores the need for more investment in health care domains that currently demonstrate low compliance with best practices, such as women’s health, ophthalmology, dental care, and allergy. Additionally, this study produced quantiles across different health care domains and NICE tiers, which could be used to compare health care domain-specific DHIs in future studies. 
Education in focus: Significant improvements in student learning and satisfaction with ophthalmology teaching delivered using a blended learning approach
The COVID-19 pandemic forced an inevitable shift towards online and distance learning to address the challenges posed by government directives and the need for social distancing while continuing health professions education (HPE) . Reviews that investigated developments in medical education in response to the COVID-19 pandemic highlighted that, in the immediate response, the majority of interventions described a pivot to online learning. However, although the need for continued clinical contact remained, clinical contact was often replaced in curricula with remote, distance or telehealth alternatives. In the Best Evidence in Medical Education (BEME) rapid review, BEME 63, the authors identified a significant focus on sharing experiences rather than on robust evaluation or research enquiry, and less than 50% of the studies reviewed described educational outcomes. BEME Guide no. 64 acknowledged that online learning will undoubtedly continue to be a feature of medical education long after the pandemic, but encouraged educators to select strategies deliberately and thoughtfully and to consider the differential impacts of these approaches. BEME Guide no. 71 recognised the limitations of remote learning, including the loss of social interaction, the lack of hands-on experience and challenges with technology; nevertheless, the authors recommended its continued use in higher education because of the flexibility it offered and highlighted practical advice to optimize the online environment . Among the educational interventions adopted in response to the COVID-19 pandemic, the flipped classroom (FC) has been reported to be efficacious in responding to these extraordinary challenges in medical education . The FC, a form of blended learning and an instructional strategy, seeks to enhance student engagement and learning. It involves students completing readings autonomously outside of scheduled class time and participating in live problem-solving activities during class time. In undergraduate ophthalmology education, studies by Diel et al found high levels of satisfaction with an FC approach and reported no changes in knowledge acquisition , and a reduction in students’ pressure to perform, course burden and anxiety, along with increased confidence in triaging common eye complaints . In our initial educational response to the pandemic at our university, we implemented a remote online flipped classroom (OFC) approach to facilitate delivery of an ophthalmology clinical attachment for medical students, and we evaluated students’ perceptions and satisfaction using the Course Evaluation Questionnaire (CEQ) . The CEQ is used globally to determine undergraduate student satisfaction and to identify areas for improvement . There is substantial evidence supporting its reliability and validity with undergraduate and medical students , and it has been utilised in ophthalmology interventions evaluating the FC . However, the efficacy of the FC for ophthalmology education in a completely virtual setting is still insufficiently measured . We investigated student satisfaction using the CEQ following the introduction of a remote online FC, necessitated by the COVID-19 pandemic, compared with our usual delivery format, which provided a blend of didactic lectures and clinical skills sessions . Our results contradicted the existing literature on the effectiveness of a flipped-classroom approach in delivering ophthalmology content to medical students. 
Previous studies indicated a preference among students for the flipped classroom over the traditional lecture method, citing its benefits in developing problem-solving, creative thinking and teamwork skills . We identified significant levels of dissatisfaction with problem solving, communication, staff motivation and provision of feedback . As the constraints imposed by government directives and the necessity for social distancing eased, we sought to re-design the ophthalmology module to incorporate the lessons from our previous findings and the evidence-based recommendations of BEME reviews 63, 64, 69 and 71 . For subsequent iterations of the ophthalmology module, an educational strategy combining online learning and in-person seminars with practical patient-centred sessions was adopted. It was anticipated that this blended learning approach would result in improved levels of student satisfaction and knowledge gain. In this study, we investigated how a blend of traditional classroom-based and remote FC learning approaches, combined with in-person practical elements including direct patient contact, would affect student satisfaction as measured with the CEQ, and we compared these results with those previously reported for the fully OFC-based delivery of ophthalmology content . Study populations Participants in this study were 4th year senior cycle medical students enrolled in RCSI on an ophthalmology clinical attachment that takes place 20 times during the academic year. All students undertaking the ophthalmology clinical attachment module were invited to participate. This study was reviewed and approved by the Research and Ethics Committee (REC) of the RCSI, University of Medicine and Health Sciences and was conducted according to the principles expressed in the Declaration of Helsinki. Written informed consent was obtained from all participants (REC 202006015). Group 1: Online flipped classroom (OFC) group (2019/2020 ophthalmology module) As a result of the global pandemic, an online distance module was devised for students participating in the ophthalmology clinical attachment for the 2019/2020 academic year. As previously described, these students (total n = 114) followed a curriculum delivered solely through an online flipped classroom (OFC) and, for the purposes of the current study, functioned as our comparison group . Recruitment period for this study cohort: 19th October 2020 to 18th December 2020. Group 2: Blended Learning (BL) group (2020/2021 ophthalmology module) The blended learning (BL) delivery of the 4th year ophthalmology clinical attachment began on the 5th of October 2020 in the Royal Victoria Eye and Ear Hospital (RVEEH) and finished on the 26th of April 2021. Recruitment period for this study cohort: 5th April 2021 to 28th May 2021. Students were assigned to groups (10–12 in each) by the SARA (Student, Academic & Regulatory Affairs) office, RCSI, before commencing their clinical attachment week. BL students attended on-site teaching sessions, remote online FC sessions and in-person, patient-centred clinical skills teaching sessions. Module description The aims of the ophthalmology module were to enable the students to develop the clinical knowledge and skills to assess any patient presenting with an eye disorder and to formulate an appropriate differential diagnosis and management or referral plan. 
Our objective was to ensure constructive alignment of the module with the existing learning outcomes despite the change in delivery, whilst responding to the changes precipitated by the COVID-19 pandemic regarding social distancing . On-site in-person teaching : students attended sessions on the Anatomy of the Eye, History taking in ophthalmology, Patient-based teaching and Clinical skills, each consisting of a 60-minute small-group teaching session (face-to-face lecture with a 15-minute question-and-answer session) led by an ophthalmologist. Online flipped classroom : students were asked to watch pre-recorded video lectures (Cataract, Glaucoma, Diabetic Retinopathy, AMD) online in advance of a one-hour interactive session led by an ophthalmologist. This was supplemented by slide sets of the didactic lecture material without audio. After the pre-class lecture, students attended a synchronous, online, live interactive session on the same topic. These Blackboard Collaborate (BBc) sessions included problem solving, clinical vignettes and MCQs relating to the recorded lecture. Additional BBc sessions covered 3 other key topics: red eye, sudden loss of vision and change in appearance. Facilitators prepared clinical cases and related MCQs that addressed learning outcomes and promoted engagement for use during the interactive online session. The facilitator encouraged problem solving using the poll feature of BBc, which promoted both discussion and active learning. Clinical skills : students attended in-person practical clinical skills sessions covering the following examinations: Snellen visual acuity, direct ophthalmoscopy, eye movements, pupil reactions, visual fields to confrontation, the cover test for strabismus and external eye examination with a pen torch. Patient-centred teaching : students engaged in in-person, patient-led practical teaching sessions, which consisted of taking patient histories, reading patient charts, examining patients and discussing the outcomes of the consultation with the supervising clinical tutor. Students also attended outpatient clinics as observers, listening to patient histories, examining clinical signs and discussing patient cases with the attending doctors. Knowledge was tested upon completion of the module via a multiple-choice question (MCQ) exam. Clinical competency (skills) was assessed by practical examination of fundoscopy skills. Digital training Blackboard Collaborate (BBc) has previously been shown to have utility as a platform to support nursing students’ placement learning. Several studies have highlighted the importance of training to develop students’ digital literacy and to facilitate student engagement with this form of technology . To support this, guides to the use of BBc were prepared and provided to the students ahead of the online module. Digital training was provided to ophthalmology faculty, along with support guides for the use of the BBc platform. Instrument and data collection To investigate student perceptions and satisfaction, all students (257) were invited to complete the CEQ36 online via Survey Monkey. Each item of the questionnaire is answered using a standard 5-point Likert scale where the levels of agreement range from “strongly agree” (scoring a “1”) to “strongly disagree” (scoring a “5”). The CEQ36 measures six constructs established as important learning environment features within the context of higher education , which are presented in . 
In addition to the CEQ36 data, final anonymised MCQ exam scores were obtained for each student in the study. Statistical analysis Descriptive statistics were used to describe the characteristics of the two groups (OFC (n = 28) vs BL (n = 59)), and Chi-square tests/Fisher exact tests or independent samples t-tests were used to explore differences between the groups. The scores of the final MCQ exam were compared using an independent samples t test. The questionnaire data collected from students were analysed using Mann-Whitney U tests to explore potential differences between the groups. During analysis, responses for the ‘agree’ and ‘strongly agree’ categories were combined; similarly, responses for the ‘disagree’ and ‘strongly disagree’ categories were combined. All statistical analyses were performed in GraphPad Prism V5 or Stata v13. 
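The group comparisons described above translate directly into standard statistical calls; the authors report using GraphPad Prism and Stata, so the R sketch below is only an illustrative equivalent. The data frame `ceq`, its grouping column `group` ("OFC"/"BL"), the demographic column `gender`, the exam column `mcq_score`, and the item column `q4` are all hypothetical names.

```r
# Illustrative R equivalent of the analyses described above (the authors used GraphPad Prism/Stata).
# `ceq` is a hypothetical data frame: one row per respondent, Likert items coded
# 1 ("strongly agree") to 5 ("strongly disagree").

# Demographics: chi-square test, or Fisher exact test where cell counts are small
chisq.test(table(ceq$group, ceq$gender))
fisher.test(table(ceq$group, ceq$gender))

# Final MCQ exam scores: independent samples t test
t.test(mcq_score ~ group, data = ceq)

# A single CEQ item: Mann-Whitney U (Wilcoxon rank sum) test between the groups
wilcox.test(q4 ~ group, data = ceq)

# Collapse "strongly agree"/"agree" and "strongly disagree"/"disagree" before tabulating
collapse_likert <- function(x) {
  cut(x, breaks = c(0, 2, 3, 5), labels = c("agree", "neutral", "disagree"))
}
table(collapse_likert(ceq$q4), ceq$group)
```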
Descriptive statistics were used to describe the characteristics of the two groups (OFC (n = 28) vs BL (n = 59)), and the Chi-square test/Fisher exact test or independent samples t-test was used to explore differences between the groups. The scores of the MCQ final exam were compared using an independent samples t test. The questionnaire data were analysed using Mann-Whitney U tests to explore potential differences between the groups. During analysis, responses for the ‘agree’ and ‘strongly agree’ categories were combined; similarly, responses for the ‘disagree’ and ‘strongly disagree’ categories were combined. All statistical analyses were performed in GraphPad Prism V5 or Stata v13. A total of 257 undergraduate medical students who received the BL delivery of the ophthalmology clinical attachment were invited to participate in this study. Of these, 59 students (23%) agreed to take part in the study and completed an online CEQ. A total of 114 students who had received OFC delivery of ophthalmology content the year prior attended online tutorials as described previously . Of these, 28 agreed to participate (25%). The demographic distribution of the participants is presented in . There was no evidence of a difference in gender or age between the OFC and BL groups for the classes as a whole (column 1 v 3), or between students in the OFC group or the BL group who participated in the online surveys (column 2 v 4). Student perceptions The responses from the students regarding the six constructs established as important learning environment features within the context of higher education (Good Teaching (GT), Generic Skills (GS), Appropriate Assessment (AA), Appropriate Workload (AW), Clear Goals and Standards (CG) and Emphasis on Independence (IN)) are graphically summarised in . Overall, students indicated a preference for the BL approach compared to the OFC approach. We observed significant differences between the responses of the OFC and BL groups regarding the learning experience, perceived value of the flipped classroom, teaching process, skill development and the evaluation system, as outlined in . Due to the small number of respondents in some categories, the Strongly Agree and Agree, and likewise the Strongly Disagree and Disagree, categories were combined for analysis. Furthermore, also owing to the small number of respondents, the margin of error varied substantially, ranging from 5.7% to 12.6% for estimates of Agree/Strongly Agree in the BL group, and 16.7% to 18.5% for estimates of Agree/Strongly Agree in the OFC group.
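To make the per-item analysis described above concrete, the sketch below works through one CEQ item with hypothetical response data. The number of comparisons used for the Bonferroni adjustment referred to in the scale-level results that follow is assumed here to be the items within a single scale, as the manuscript does not state whether adjustment was applied per scale or across all 36 items; the margin-of-error calculation uses the standard normal approximation for a proportion.

```python
# Hypothetical-data sketch of the per-item CEQ analysis: Mann-Whitney U on the raw
# 1-5 Likert responses, a Bonferroni adjustment, collapsing of the agreement
# categories for reporting, and the normal-approximation margin of error.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
ofc = rng.integers(1, 6, size=28)  # hypothetical OFC responses (1 = strongly agree ... 5 = strongly disagree)
bl = rng.integers(1, 6, size=59)   # hypothetical BL responses

u_stat, p_raw = mannwhitneyu(ofc, bl, alternative="two-sided")

n_comparisons = 6                        # assumed: items within one CEQ scale
p_adj = min(p_raw * n_comparisons, 1.0)  # Bonferroni-adjusted p value

agree_bl = np.mean(bl <= 2)              # proportion Agree/Strongly Agree in the BL group
moe = 1.96 * np.sqrt(agree_bl * (1 - agree_bl) / len(bl))

print(f"U = {u_stat:.1f}, raw p = {p_raw:.3f}, Bonferroni p = {p_adj:.3f}")
print(f"BL Agree/Strongly Agree = {agree_bl:.2f} +/- {moe:.2f}")
```

With n = 28 in the OFC group, the worst-case margin of error under this approximation is 1.96 × sqrt(0.25/28) ≈ 18.5%, which is consistent with the upper bound quoted above.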
Good Teaching scale We observed that the BL delivery approach resulted in significantly greater levels of student satisfaction on the GT scale compared to the OFC approach. Specifically, the BL group felt that the teaching staff motivated students to do their best (Q4, p<0.001), put a lot of time into commenting on students' work (Q9, p = 0.004) and made a real effort to understand difficulties that students may be having with their work (Q20, p = 0.001). However, having adjusted for multiple comparisons, Q9 was no longer significant. Furthermore, compared to the OFC group, the BL students felt that faculty were extremely good at explaining course content (Q23, p = 0.05) and that they made significant efforts to make the subjects interesting (Q25, p = 0.013). Critically, we observed a significant improvement in student perceptions regarding the course trying to get the best out of its students among the BL group compared to the OFC student group (Q33, p = 0.001); having adjusted for multiple comparisons, only this item remained statistically significantly different (Q33, p = 0.033). Clear Goals and Standards scale There was no evidence of a difference in student perceptions on the goals and standards (CG) scale, specifically about what was expected from them (Q18), about the standard of work required (Q1) and about faculty expectations of students being made clear (Q35). Overall, there was no evidence of a difference in goals and standards after Bonferroni adjustment; however, before adjustment there was some evidence that the BL group were significantly more satisfied with the CG scale than the OFC group. Specifically, students felt that they had a clear idea of what was going on and what was expected from them (Q8, p = 0.006) and that the aims and objectives of the course were made very clear (Q24, p = 0.027). Generic Skills scale There was no evidence of a difference in student perceptions about the capacity of the OFC or BL course to improve their written communication skills (Q13). In contrast to the OFC course, students who participated in the BL course showed some evidence of a difference prior to Bonferroni adjustment, finding that it helped develop their problem-solving skills (Q2, p = 0.001), sharpened their analytical skills (Q6, p = 0.023), developed their ability to work as a team member (Q11, p<0.001), improved their confidence about tackling unfamiliar problems (Q12, p = 0.029) and developed their ability to plan work (Q28, p = 0.003). Developing their problem-solving skills (Q2) and developing their ability to work as a team member (Q11) remained statistically significant following adjustment for multiple comparisons. Appropriate Assessment scale There was no evidence of a difference between the OFC and BL groups in student perceptions of the impression that staff are more interested in testing what students have memorised (Q17) or ask too many questions about facts (Q26). Additionally, there was no difference between the OFC and BL groups in their perceptions of the form in which feedback was given (Q29) or that just by working hard around exam times they could get through the course (Q32). We observed significantly greater levels of student satisfaction among the BL group with the impression that faculty can learn from students (Q7, p = 0.001) compared to the OFC group. Furthermore, the BL group indicated that doing well on the course required more than just a good memory (Q10, p = 0.007). Appropriate Workload scale There was no evidence of a difference in student perceptions in relation to the workload (Q5), the number of topics covered in the syllabus (Q14), the amount of time given to learn (Q19), the pressure felt by students (Q27) or how the volume of work affects comprehension of topics (Q36). Emphasis on Independence scale There was no evidence of a difference in student perceptions between the OFC and BL groups on the IN scale regarding opportunities to choose the particular areas you want to study (Q3), that the course encouraged them to pursue their academic interests (Q15) or their opportunities to discuss how they were going to learn with lecturers (Q30). However, we observed that the BL group were significantly more satisfied with elements of the IN scale compared to the OFC group.
Specifically, students in the BL group felt they had greater levels of choice regarding how they would learn (Q16, p = 0.005), the work they had to do (Q21, p = 0.031) and the ways in which they were assessed (Q34, p = 0.004). Questions regarding the value of the flipped classroom Previous studies have highlighted questions within the CEQ survey which provide insights into the perceived value of the flipped classroom . The FC scale questions overlap with the GT and GS scales; specifically, questions 2, 4, 5, 11, 12, 13 and 28. Student survey responses indicated a significant level of student satisfaction with the online flipped classroom approach as part of the revised BL curriculum. As mentioned above, students in the BL group felt that there were more opportunities to improve their problem-solving skills (Q2, p = 0.01) and that staff did more to motivate them (Q4, p<0.001). In addition, they felt the course helped develop their ability to work as a team member (Q11, p<0.001), tackle unfamiliar problems (Q12, p = 0.029) and plan their own work (Q28, p = 0.003) compared to the OFC group. When asked to rate the statement “Overall, I am satisfied with the quality of this course”, there was no evidence of a difference in the rating between the OFC and BL groups (Q37). Comparison of overall student performance on final multiple-choice exam Next, we compared students' exam scores before and after the educational intervention for all students in the OFC (n = 114) and BL groups (n = 257) and for the students in the OFC (n = 28) and BL groups (n = 59) who responded to the survey. Students answered 20 ophthalmology multiple-choice questions (MCQs) as part of completing the course. Each question had the same weight, and the total score was converted to a 0–100 scale. An independent samples t test was used to compare the differences between the two groups. This analysis of the final exam MCQ score showed that there was no statistical difference between the OFC and BL groups (p = 0.0560). Comparison of the final exam MCQ scores of survey responders in the OFC and BL groups likewise found no evidence of a statistical difference in the score achieved. Overall, this indicates that the BL approach did not negatively influence knowledge gain.
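A brief sketch of the exam-score comparison just described, using hypothetical raw scores: each of the 20 equally weighted questions contributes 5 points to the 0–100 scale, and group means are compared with an independent samples t test.

```python
# Hypothetical-data sketch of the final MCQ comparison: convert the number of
# correct answers (out of 20) to a 0-100 scale and compare group means with an
# independent samples t test.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
correct_ofc = rng.integers(10, 21, size=114)  # hypothetical number correct, OFC cohort
correct_bl = rng.integers(10, 21, size=257)   # hypothetical number correct, BL cohort

score_ofc = correct_ofc / 20 * 100            # 0-100 scale
score_bl = correct_bl / 20 * 100

t_stat, p_value = ttest_ind(score_ofc, score_bl)  # two-sided by default
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```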
The imperatives of the COVID-19 pandemic mandated an inevitable transition to online and distance learning, addressing the challenges posed by government directives and social distancing requirements. In the aftermath of the COVID-19 pandemic, blended learning has become an accepted approach in health professions education . While student safety and wellbeing were paramount, removing medical students from the clinical context to minimise risk associated with the COVID-19 pandemic was not a feasible long-term strategy. BL involves both face-to-face and online learning components and was therefore an advantageous approach to health professions education during the pandemic, as it offers the best of both approaches . In this study, we wanted to assess student satisfaction with a revised ophthalmology module adopting a BL format, which included online learning and in-person seminars combined with practical patient-centred sessions.
Our goal was to compare BL with the previous delivery format, which relied solely on a remote online flipped classroom to facilitate continued delivery of the ophthalmology module. It was hypothesised that, as an educational intervention, the blended learning approach would continue to facilitate delivery of content while maintaining or improving levels of student satisfaction and knowledge gain, as determined by the CEQ and an MCQ examination. Learner satisfaction is a multidimensional construct and is related to an individual's subjective assessment . Student satisfaction hinges on the efficacy of educational courses and the individual's enthusiasm and enjoyment in the learning process . Blended learning offers learners more choice through a multimodal delivery of course content, and BL has been shown to yield more favourable results in terms of knowledge outcomes than traditional learning in HPE . While the analysis of the final exam MCQ scores in our study showed that there were no statistical differences between the OFC and BL groups, the BL group showed higher satisfaction with the choice provided by the BL approach regarding how they learn, the work they completed, and the methods of assessment (Emphasis on Independence scale). The practical constraints related to the OFC approach afforded learners fewer freedoms and less choice in their educational journey. Chick et al suggested that innovative technology, including the FC, could play an essential role in bridging the educational gap during the unprecedented COVID-19 pandemic . The FC has been demonstrated to be accessible and user friendly , and was favourably received among ophthalmology residents, with reported improvement in test scores . The OFC approach has been utilised by many academics, who found it was well received by students and in some instances resulted in similar or enhanced knowledge gain compared to the traditional delivery of teaching . However, our previous study's findings were in contrast to this literature, and we found significant dissatisfaction with the online flipped classroom approach . Given that our initial rapid response to the challenges of delivering content during the pandemic relied significantly on a remote OFC approach, we sought to determine student perceptions of this model. Overall, students reported a lack of satisfaction with this model, citing a lack of staff motivation, difficulties determining the standard of work required and a lack of development of critical thinking and problem solving as issues with the OFC approach for remote ophthalmology teaching . We believe that a lack of faculty preparedness , digital fatigue and student uncertainty may also have contributed to student dissatisfaction with the OFC approach . In this study, compared to the OFC group, the BL students felt staff excelled at explaining course content (Q23, p = 0.05) and made significant efforts to make the subjects engaging (Q25, p = 0.013). The variety in the BL approach offers more choice and enables learners to engage with material through various mediums, and this was also reflected in the BL group's satisfaction with the choices regarding how they would learn and be assessed. Additionally, the multimodal delivery of course content in BL appears to have addressed some of the technological challenges faced by staff in the initial response to the pandemic.
Student satisfaction is also associated with an individual's interaction with their peers and with faculty , and in a national review in the UK, having a “social life and meeting people” was acknowledged as a crucial factor contributing to overall satisfaction . Our results demonstrated that BL resulted in significantly greater levels of student satisfaction on the Good Teaching scale compared to the OFC approach, specifically relating to items associated with staff motivating students to do their best and making an effort to understand student difficulties. We noted markedly higher levels of student satisfaction within the BL group with survey items relating to students feeling motivated by staff (Q4), working as part of a team (Q11), and how the course tried to get the best out of students (Q33). HPE involves hands-on learning and elements of teamwork and effective communication. Online learning has been associated with poor engagement and, during the COVID-19 pandemic, reduced interpersonal interaction , and has also been associated with lower levels of preparedness and a lack of hands-on training . Our revised BL curriculum, including online learning with in-person seminars and practical patient-centred sessions, improved students' self-reported problem solving, analytical skills and ability to work as part of a team. Study limitations The interpretations drawn from our investigations should be considered within the context of the limitations inherent to this study. One such limitation is the relatively low level of student engagement observed in these investigations, which subsequently led to suboptimal response rates and smaller sample sizes. This resulted in varying margins of error around estimates, and results should be interpreted with caution. The ongoing global pandemic during the participant recruitment phase represents a factor potentially influencing the lack of study participants. Furthermore, our study encompasses participants from two distinct iterations of clinical attachments spanning two academic years. It is noteworthy, however, that an analysis of each student cohort, as well as of those who actively engaged in the study, determined no statistically significant disparities in terms of characteristics/demographics. The COVID-19 pandemic compelled an inevitable shift to online and distance learning to address challenges posed by government mandates and social distancing requirements. However, in post-pandemic HPE, it is crucial to assess the effectiveness and learner perceptions of online and distance learning interventions.
In line with recent BEME reviews, we implemented a revised curriculum which included a blend of traditional classroom-based and remote learning approaches combined with in-person practical elements, including direct patient contact with mitigated risk. We provided support and training for both faculty and students, which will help to increase digital proficiency and engagement as online elements continue as a central feature of medical education. These changes resulted in significant increases in student satisfaction. Our study revealed a substantial student preference for blended learning (BL) over the online flipped classroom (OFC) approach, with comparable student performance based on MCQ examinations. Importantly, this study presents a unique insight into the repercussions of introducing an educational intervention centred on blended learning amidst the pandemic. This insight focusses on student satisfaction and the enhancement of learning experiences, underlining the distinctive value of our research. These findings indicate a preference for reintegrating in-person and patient engagement activities in post-pandemic health professions education. S1 Appendix Ophthalmology module learning outcomes. (DOCX)
How significant is the radiation exposure during electrophysiology study and ablation procedures for supraventricular tachycardia?
c5d9ede3-94c4-4a03-b439-b0dfac2e8d8a
8065359
Physiology[mh]
Introduction Radiation exposure during conventional electrophysiology and radiofrequency ablation (EP/RFA) procedures has been a reason cited for the increasing use of newer, expensive electroanatomic mapping systems. To put this in perspective, we compared the ionizing radiation (IR) exposure of conventional EP/RFA procedures for supraventricular tachycardia (SVT) with that of coronary angiography (CAG) performed via the radial route. Method We prospectively analyzed two months of data (January and February 2020) on IR exposure in all successful SVT ablation procedures and radial CAG. Patients with atrioventricular nodal reentrant tachycardia, accessory pathways and atrial tachycardia were included. Patients with more than one tachycardia mechanism were excluded. In the CAG arm, we excluded patients with i) acute coronary syndrome taken for primary intervention, ii) anomalous coronary origins and iii) prior coronary artery bypass surgery. During CAG, fluoroscopy was performed at a pulse rate (PR) of 15 frames per second (FPS), while EP/RFA was done mostly at a PR of 7.5 FPS (during transseptal puncture, it was increased to 15 FPS). All the procedures were done in a floor-mounted catheterisation laboratory (Artis Zee, Siemens). We collected data on air kinetic energy release in matter (Kerma), measured in milligray (mGy), dose area product (DAP), measured in cGy·cm², total cine exposures and fluoroscopy time, measured in minutes. These were compared between the two groups using the independent t test. Results Altogether, 55 patients undergoing CAG and 45 patients undergoing EP/RFA were found eligible for the study. All procedures were performed with conventional mapping. The age of the CAG group was 57.8 ± 11 years, with a male/female distribution of 37/18; in the EP/RFA group the age was 42 ± 15.2 years, with a male/female distribution of 22/23. The diagnoses were atrioventricular nodal re-entrant tachycardia (23, 51.1%) [of which 2 were atypical and the rest typical], accessory pathways (18, 40.0%) [of which 9 were right-sided pathways, 7 left-sided pathways, 1 a coronary sinus diverticulum pathway and 1 an anteroseptal pathway], and atrial tachycardia (4, 8.9%) [of which 2 were left atrial tachycardias, 1 was ablated from the non-coronary sinus of the aorta and 1 from the upper septum]. Two left atrial tachycardias and 3 left-sided pathways required septal punctures. No jugular puncture was needed. All procedures were successful. The details of IR exposure are presented in . As evident, Air Kerma was much lower in EP/RFA than in CAG (249.1 ± 267 mGy v/s 671.9 ± 328.6 mGy, p < 0.001), as was DAP (1747.7 ± 2309 cGy·cm² v/s 3373.3 ± 1800.4 cGy·cm², p < 0.001). The total number of cine exposures was also much lower in EP/RFA than in CAG (3.71 ± 4.1 v/s 9.55 ± 2.44, p < 0.001). The fluoroscopy time was higher in EP/RFA than in CAG (13.4 ± 10.6 min v/s 3.6 ± 2.8 min, p < 0.001). A Pearson product–moment correlation was run to determine the relationship between fluoroscopy time and Air Kerma in the EP/RFA group. There was a strong, positive correlation between fluoroscopy time and Air Kerma, which was statistically significant ( r = .682, n = 45, p < .001). A linear regression established that fluoroscopy time statistically significantly predicted Air Kerma, F(1, 43) = 37.47, p = .0001, with fluoroscopy time accounting for 46.6% of the variability in Air Kerma. The regression equation was: predicted Air Kerma (mGy) = 19.15 + 17.17 × fluoroscopy time (min) .
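As a quick numerical check of this relationship, the sketch below uses only the coefficients and group means reported above; it is illustrative rather than a re-analysis of the patient-level data.

```python
# Check of the fitted relationship: predicted Air Kerma (mGy) = 19.15 + 17.17 * t (min),
# using the reported coefficients and the reported mean CAG Air Kerma.
intercept, slope = 19.15, 17.17
mean_cag_air_kerma = 671.9  # mGy, reported mean Air Kerma for radial CAG

# Fluoroscopy time at which an EP/RFA case would reach the mean CAG exposure
t_equal = (mean_cag_air_kerma - intercept) / slope
print(round(t_equal, 1))    # about 38.0 minutes

# Variance explained implied by the reported correlation coefficient
r = 0.682
print(round(r ** 2, 3))     # about 0.465, consistent with the ~46.6% quoted
```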
According to this equation, around 38 min of fluoroscopy time (roughly three times the mean fluoroscopy time of an EP/RFA case) would be required in an EP/RFA case to equal the mean radiation exposure of a CAG. Discussion Standard studies published after 2010 show an average Air Kerma in the range of 500–600 mGy, an average DAP of around 3000–4000 cGy·cm² and an average fluoroscopy time in the range of 3–8 min for CAG. Studies on EP/RFA of SVT show Air Kerma in the range of 200–300 mGy, an average DAP of around 2000 cGy·cm² and an average fluoroscopy time of 12–15 min. Our study shows similar findings for the two categories and is unique in comparing IR between CAG and EP/RFA in the same center. We found that a conventional EP/RFA procedure for an SVT can be done with much lower IR exposure than a CAG procedure. The major factors for this are i) the requirement for a higher digital PR during CAG and ii) the negligible need for cine imaging during EP/RFA procedures. Hence, despite a three-fold longer fluoroscopy time, the total IR for EP/RFA was just around 40% of that during CAG procedures. Next-generation operators may use low/zero-fluoroscopy techniques for the standard procedures included in this study. However, in addition to the financial burden, this has to match or better the excellent long-term safety record of AVNRT ablation using conventional mapping. Conclusion The radiation exposure during conventional EP/RFA procedures for SVT was modest, far less than that for a diagnostic CAG done via the radial route.
Implementation of Brief Submaximal Cardiopulmonary Testing in a High-Volume Presurgical Evaluation Clinic: Feasibility Cohort Study
78e770e6-9e23-4d5c-9766-5c82a82b2eee
11888076
Surgical Procedures, Operative[mh]
Background Assessment of functional capacity or exercise tolerance, as measured by self-reported metabolic equivalents (METs), remains a cornerstone of preoperative risk stratification. METs are defined as multiples of the basal metabolic rate (1 MET=3.5 mL kg –1 min –1 ), and self-reported ability to climb 1 flight of stairs has a general consensus of 4 METs . A threshold of ≤4.6 METs (self-reported inability to climb 2 flights of stairs) has been associated with major adverse cardiac events, all-cause mortality, and increased perioperative complications . Despite its importance, published reports have cast doubt on the accuracy of provider-driven and self-reported assessment of functional capacity . Thus, reliable and efficient methods to precisely characterize functional capacity continue to be of importance in preoperative risk stratification. Cardiopulmonary exercise testing (CPET) precisely characterizes exercise tolerance by analyzing cellular respiration at rest and during exercise challenges. By measuring resting gas exchange followed by maximal exercise to expose pathophysiological impairments, CPET exploits a symptom-limited approach with a 3-minute resting stage, 3 minutes of unloaded cycling, and a 10- to 12-minute ramp stage with increasing resistance until terminated by the participant . Abnormal CPET measures have been frequently associated with perioperative morbidity, with a peak oxygen uptake (VO 2 ) of <15 mL kg –1 min –1 reported as a threshold for elevated cardiopulmonary risk after thoracic and major noncardiac surgery . In addition, peak VO 2 impairment predicts an increased risk of surgical site infection, postoperative respiratory failure, and critical care readmission . However, CPET has not been widely adopted in preoperative testing, likely due to limited availability, required technical skills, necessity of maximal patient effort, complexity of task, and cost. Yet, conventional preoperative care, usually comprised of subjective or structured, survey-based, clinician estimation of preoperative functional capacity, has demonstrated poor sensitivity in the identification of patients with low functional capacity (≤4 METs), when compared to CPET . In contrast to a conventional symptom-limited approach, submaximal cardiopulmonary exercise testing (smCPET) uses a time-limited approach and predictive analytics to provide estimates of peak cardiopulmonary performance . A maximal exercise effort is not required since it analyzes the VO 2 efficiency slope to predict peak cardiopulmonary performance . Of note, the VO 2 efficiency slope has a strong correlation with peak VO 2 ( r =0.941), permitting effort-independent prediction of conventional CPET measures . Brief smCPET has demonstrated diagnostic utility in predicting postoperative length of stay, complications, and prognosis in heart failure, pulmonary hypertension, and other conditions . Objectives These advantages suggest that time-limited smCPET may be useful for rapid preoperative assessment of exercise tolerance. Therefore, the primary objective was to determine the logistic feasibility of smCPET integration within a high-volume presurgical evaluation clinic. 
Our measured feasibility end points were (1) operational efficiency, based on the experimental session length being <20 minutes; (2) modified Borg survey of perceived exertion, with a score of ≤7 indicating no more than moderate exertion; (3) high participant satisfaction with smCPET task execution, with a score of >8 (out of 10); and (4) high patient satisfaction with smCPET scheduling, with a score of >8 (of 10). Our secondary objective was to determine if comparable smCPET measures were significantly different from structured survey findings. The secondary end points were a comparison of (1) self-reported subjective METs from a survey versus smCPET equivalents (extrapolated peak METs), (2) Duke Activity Status Index (DASI) estimates versus smCPET equivalents (extrapolated peak METs), and (3) estimated DASI maximal oxygen consumption (estimated peak VO 2 ) versus smCPET equivalents (extrapolated peak VO 2 ). This study hypothesized that brief smCPET would achieve two objectives: first, meet feasibility end points indicating successful implementation, and second, similar to prior published reports regarding provider-driven functional capacity assessments, identify lower peak METs and VO 2 , when compared to structured surveys. Assessment of functional capacity or exercise tolerance, as measured by self-reported metabolic equivalents (METs), remains a cornerstone of preoperative risk stratification. METs are defined as multiples of the basal metabolic rate (1 MET=3.5 mL kg –1 min –1 ), and self-reported ability to climb 1 flight of stairs has a general consensus of 4 METs . A threshold of ≤4.6 METs (self-reported inability to climb 2 flights of stairs) has been associated with major adverse cardiac events, all-cause mortality, and increased perioperative complications . Despite its importance, published reports have cast doubt on the accuracy of provider-driven and self-reported assessment of functional capacity . Thus, reliable and efficient methods to precisely characterize functional capacity continue to be of importance in preoperative risk stratification. Cardiopulmonary exercise testing (CPET) precisely characterizes exercise tolerance by analyzing cellular respiration at rest and during exercise challenges. By measuring resting gas exchange followed by maximal exercise to expose pathophysiological impairments, CPET exploits a symptom-limited approach with a 3-minute resting stage, 3 minutes of unloaded cycling, and a 10- to 12-minute ramp stage with increasing resistance until terminated by the participant . Abnormal CPET measures have been frequently associated with perioperative morbidity, with a peak oxygen uptake (VO 2 ) of <15 mL kg –1 min –1 reported as a threshold for elevated cardiopulmonary risk after thoracic and major noncardiac surgery . In addition, peak VO 2 impairment predicts an increased risk of surgical site infection, postoperative respiratory failure, and critical care readmission . However, CPET has not been widely adopted in preoperative testing, likely due to limited availability, required technical skills, necessity of maximal patient effort, complexity of task, and cost. Yet, conventional preoperative care, usually comprised of subjective or structured, survey-based, clinician estimation of preoperative functional capacity, has demonstrated poor sensitivity in the identification of patients with low functional capacity (≤4 METs), when compared to CPET . 
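For orientation, the conversion below relates the two risk thresholds cited in the Background (≤4.6 METs and a peak VO 2 of <15 mL kg –1 min –1 ) using the conventional resting value of 1 MET = 3.5 mL kg –1 min –1 ; this is an illustrative sketch and is not derived from the study data.

```python
# Illustrative MET <-> VO2 conversion using the conventional 1 MET = 3.5 mL/kg/min.
MET_IN_ML_KG_MIN = 3.5

def vo2_to_mets(vo2_ml_kg_min: float) -> float:
    return vo2_ml_kg_min / MET_IN_ML_KG_MIN

def mets_to_vo2(mets: float) -> float:
    return mets * MET_IN_ML_KG_MIN

print(round(vo2_to_mets(15.0), 1))  # a peak VO2 of 15 mL/kg/min is roughly 4.3 METs
print(round(mets_to_vo2(4.6), 1))   # 4.6 METs is roughly 16.1 mL/kg/min
```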
Trial Design This is an ongoing prospective open-label clinical device study approved by the Yale University Institutional Review Board (IRB#2000033885; ClinicalTrials.gov: #NCT05743673 ; principal investigator: ZJC; date of registration: December 5, 2023). This clinical trial was registered prior to participant enrollment. Study Population Inclusion criteria for study enrollment included age of 60 years or older, a Revised Cardiac Risk Index (RCRI) of ≤2, self-endorsed subjective METs of ≥4 (endorses reliably climbing 2 flights of stairs), and presenting for noncardiac surgery. The aim was to recruit 40 participants for the feasibility study. This number was estimated to be adequate to identify any study-related logistic process problems or patient-centered outcome deficiencies and to determine the operational efficiency of this novel system process. The RCRI≤2 criterion was selected given the novelty of smCPET in preoperative evaluation. Given that participants were screened prior to surgical procedures, exclusion criteria were adapted to maintain current standard-of-care practices in preoperative evaluation, which include mandatory subspecialty evaluation of select cardiopulmonary conditions.
Participants with recorded severe or critical heart valve disease, active exertional angina, nonambulation, gait abnormalities, end-stage renal disease, severe peripheral vascular disease, and neurological motor deficits were excluded. Additionally, non–English-speaking participants, those under legal guardianship, and participants documented to not have personal health care decision-making capacity were also excluded. After prescreening, a phone call was placed by a study team member, and eligible participants were invited for in-person written informed consent, preoperative evaluation, questionnaire assessment of METs, and a 6-minute smCPET experimental session. Testing Environment Testing was performed at the presurgical evaluation (PSE) clinic at Yale New Haven Hospital, which is responsible for more than 40,000 preoperative evaluations per year. On a daily basis, the PSE clinic is staffed by an anesthesiologist, 2 resident physicians, 3 certified nurse practitioners, and 6 nursing staff and contains 6 exam rooms. Study Apparatus The US Food and Drug Administration–approved Shape II is a compact, cardiopulmonary, breath-by-breath, exercise testing system that uses brief submaximal exercise effort (3 minutes) to generate multiple quantitative measures of actual and predicted peak cardiopulmonary performance . Predicted peak exercise values are automatically calculated by the device using oxygen efficiency slope equations . Furthermore, the device has been previously validated against conventional CPET measurements . The compact design allows all the necessary equipment to be placed on a standard rolling cart, which was deployed in a PSE clinic examination room (2.4 × 2.4 m). A stairstep (14-cm height) was used for the graded exercise portion. The graded exercise was performed with a device prompt (“begin exercise”), with auditory prompts at 1-minute intervals to increase step frequency if possible. A metronome was used to provide cadence. The device provides an option for either timed or symptom-limited assessment. The timed session was selected for all participants. The timed device session requires a total of 6 minutes: 2 minutes of seated baseline resting data, 3 minutes of escalating exercise using the stairstep, and 1 minute of seated recovery data to generate a variety of individual measures of cardiac and pulmonary physiological data . Data Collection Participants received height, weight, and vital sign measurements (heart rate, blood pressure, and pulse oximetry). Written informed consent was obtained, and participants were instructed on smCPET (approximately 5 minutes). Session time was measured from the beginning of pretest METs questionnaires until the termination of the smCPET recovery phase. A session time of ≤20 minutes would indicate that 24 high-risk participants could be screened per day per machine, permitting high-volume assessment. Session components included (1) a 7-question subjective METs assessment, (2) a 12-question DASI survey, and (3) a timed smCPET (6 minutes). The modified Borg survey of perceived exertion was performed at session termination. After study interventions, a standard preoperative evaluation was completed, and the participant was discharged. A 24-hour postexperiment survey of minor and major complications and patient satisfaction was performed by telephone . With the exception of the patient satisfaction survey, all survey instruments were adapted from prior publications .
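The session-time target above implies the following rough daily throughput; this is a back-of-the-envelope sketch that assumes a single device and an 8-hour testing day, neither of which is specified in the text.

```python
# Rough throughput implied by the <=20-minute session target (assumed 8-hour day).
SESSION_MINUTES = 20
CLINIC_MINUTES_PER_DAY = 8 * 60
print(CLINIC_MINUTES_PER_DAY // SESSION_MINUTES)  # 24 sessions per day per machine
```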
DASI-estimated peak METs and peak VO 2 were calculated from individual participants' DASI scores using the recommended formula. Statistical Analysis End points were reported as follows: continuous variables were described as mean (SD); ordinal variables, as median (IQR and range); and categorical variables, as number . Secondary end points were first analyzed using the Student t test (2-tailed) to compare differences in comparable measurements. Agreement between structured survey findings and comparable smCPET measurements was assessed using 2 approaches. First, a Pearson correlation coefficient was calculated to evaluate the strength and direction of the linear relationship, followed by a Bland-Altman analysis to assess agreement between methods, where differences between paired measurements were plotted against their means. The mean difference (MD) and 95% limits of agreement (LOAs) were calculated. All analyses were carried out in R (version 4.1.1; R Foundation for Statistical Computing). To reduce the introduction of bias, a complete case analysis for missing data was performed, whereby participants with missing data were excluded from the analysis of the respective end point. Similarly, dropouts were removed from the analysis. A P value of <.05 was accepted for significance. Ethical Considerations This study was performed in accordance with the principles of the Declaration of Helsinki. Approval was granted by the Yale University Institutional Review Board (IRB#2000033885). Informed consent was obtained from all participants included in the study. All provided data were deidentified prior to analysis to maintain participant privacy. No monetary compensation was provided to the participants. JF has given express written informed consent for the publication of his image in .
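To make the derived measures and the agreement analysis described in the Statistical Analysis section concrete, a minimal sketch follows. The DASI conversion assumes the commonly cited Hlatky regression (estimated peak VO 2 = 0.43 × DASI + 9.6 mL kg –1 min –1 ); the manuscript refers only to "the recommended formula", so these coefficients should be treated as an assumption rather than as the study's exact method.

```python
# Sketch of the DASI-derived estimates and a Bland-Altman calculation.
# The DASI-to-VO2 coefficients below are an assumed (Hlatky-type) regression,
# not quoted from the study; METs use the conventional 3.5 mL/kg/min.
import numpy as np

def dasi_to_peak_vo2(dasi_score: float) -> float:
    """Estimated peak VO2 (mL/kg/min) from a DASI score (assumed coefficients)."""
    return 0.43 * dasi_score + 9.6

def dasi_to_peak_mets(dasi_score: float) -> float:
    """Estimated peak METs, using 1 MET = 3.5 mL/kg/min."""
    return dasi_to_peak_vo2(dasi_score) / 3.5

def bland_altman(a, b):
    """Mean difference and 95% limits of agreement between paired measurements."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    md = diff.mean()
    sd = diff.std(ddof=1)
    return md, (md - 1.96 * sd, md + 1.96 * sd)

# Example: a DASI score of 50.2 maps to about 31.2 mL/kg/min, i.e. about 8.9 METs.
print(round(dasi_to_peak_vo2(50.2), 1), round(dasi_to_peak_mets(50.2), 1))
```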
Participant Recruitment We identified 209 (61.6%) out of 339 potential participants that met eligibility criteria; 6 (1.8%) did not meet the inclusion criteria, 59 (17.4%) failed the prescreening criteria, and 98 (28.9%) declined study participation . Initially, 46 participants were enrolled but 3 (7%) were excluded (operator error: n=2; surgery cancellation: n=1), for a final cohort of 43 participants. Baseline Characteristics Trial participants had a median age of 68 (IQR 66-73, range: 60-86 years), 20 (47%) of 43 were female, and the mean BMI was 27.5 (SD 6.0) kg/m 2 . Preoperative RCRI score was a median of 1 (IQR 1-1; range 1-2). Essential hypertension (22/43, 51%), hyperlipidemia (17/43, 39%), and solid tumor (25/43, 58%) were the most common premorbid conditions. A total of 22 (51%) out of 43 participants were former or active smokers. Major abdominal surgeries comprised 27 (63%) out of the 43 surgical procedures . All participants completed the smCPET session components. The mean peak respiratory exchange ratio was 0.88 (SD 0.12), consistent with submaximal effort (respiratory exchange ratio<1.05). The ventilatory threshold was achieved in 22 (51%) of 43 participants (mean 227.9, SD 21.9 seconds in those that achieved ventilatory threshold). Primary End Points The mean experimental session time was 16.9 (SD 6.8) minutes. The modified Borg survey score after experimental sessions was mean 5.35 (SD 1.8), corresponding to moderate perceived exertion. All 43 participants were reached for the 24-hour postexperiment survey. The median patient satisfaction (on a scale of 1=worst to 10=best) was 10 (IQR 10-10) for scheduling and 10 (IQR 9-10) for task execution. No major or minor complications associated with study testing were reported by participants. Operational efficiency was achieved within 15 experimental sessions among 4 study team members (3 physicians and 1 undergraduate researcher). Secondary End Points Average self-reported peak METs were higher when compared to smCPET equivalents (extrapolated peak METs; mean 7.6, SD 2.0 vs mean 6.7, SD 1.8; t 42 =2.1; P <.001). DASI-estimated peak METs were higher when compared to the smCPET equivalents (extrapolated peak METs; mean 8.8, SD 1.2 vs mean 6.7, SD 1.8; t 42 =7.2; P <.001).
DASI-estimated peak VO 2 was higher than the smCPET equivalent (extrapolated peak VO 2 ; mean 30.9, SD 4.3 mL kg –1 min –1 vs mean 23.6, SD 6.5 mL kg –1 min –1 ; t 42 =2.1; P <.001). provides a comparison of values obtained from smCPET compared to structured survey–estimated peak METs and DASI-estimated peak METs. To analyze the congruency between the 3 study instruments, correlation and Bland-Altman analyses were performed. DASI-estimated METs showed a moderate positive correlation versus subjective METs ( r =0.63; P <.001). Weaker correlations were observed with smCPET-derived extrapolated peak METs versus DASI and subjective METs ( r =0.29; P =.06 and r =0.144; P =.36, respectively). DASI versus subjective METs showed an MD of 1.1 (SD 1.49; 95% LOAs –1.82 to 4.02) METs, while DASI versus smCPET-derived extrapolated peak METs showed larger discrepancies with an MD of 2.07 (SD 1.86; 95% LOAs –1.58 to 5.73) METs. The comparison between subjective METs and smCPET-derived extrapolated peak METs showed intermediate systematic bias with the widest LOAs (MD 0.97, SD 2.43 METs; 95% LOAs –3.80 to 5.75). When comparing DASI and smCPET-derived extrapolated peak VO 2 values, a positive MD was observed, indicating that DASI estimates were consistently higher (MD 7.23, SD 6.54 mL kg –1 min –1 ; 95% LOA –8.11 to 21.12) and showed poor agreement ( r =0.28; ).
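The agreement statistics reported above (paired two-tailed t test, Pearson correlation, and Bland-Altman mean difference with 95% limits of agreement) can be reproduced with standard tooling. The study itself used R 4.1.1; the following is a minimal Python sketch on synthetic paired data, not the study dataset.

import numpy as np
from scipy import stats

def bland_altman(a, b):
    """Mean difference and 95% limits of agreement between paired methods a and b."""
    diff = np.asarray(a) - np.asarray(b)
    md, sd = diff.mean(), diff.std(ddof=1)
    return md, (md - 1.96 * sd, md + 1.96 * sd)

rng = np.random.default_rng(1)
dasi_mets = rng.normal(8.8, 1.2, size=43)                # synthetic DASI-estimated peak METs
smcpet_mets = dasi_mets - rng.normal(2.0, 1.9, size=43)  # synthetic smCPET-extrapolated peak METs

t_stat, p_t = stats.ttest_rel(dasi_mets, smcpet_mets)    # paired, two-tailed
r, p_r = stats.pearsonr(dasi_mets, smcpet_mets)
md, (lo, hi) = bland_altman(dasi_mets, smcpet_mets)
print(f"t({len(dasi_mets) - 1}) = {t_stat:.1f}, P = {p_t:.3g}")
print(f"Pearson r = {r:.2f} (P = {p_r:.2f}); MD = {md:.2f} METs, 95% LOA {lo:.2f} to {hi:.2f}")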
Principal Findings Integration of brief smCPET in a high-volume PSE clinic was feasible as measured by the primary end points of session time, patient satisfaction with smCPET task execution, perceived exertion, and session scheduling. The operational efficiency of study team members was acceptable within 15 experimental sessions. Finally, smCPET measures of peak METs and VO 2 were significantly lower, when compared to comparable structured survey results. Mean session time, which included the subjective METs survey, DASI, and 6-minute smCPET session, was 16.9 (SD 6.8) minutes, with progressive improvement over the study time period as operators (n=4) became facile with the study instrument . It is important to note that smCPET comprised 6 minutes of the session time, shorter than reported times with conventional CPET (15-20 min/session) . In high-volume PSE, this may be advantageous, as patients are often seen on short notice for preoperative evaluation. Participants were able to flexibly arrange smCPET around other clinic appointments, decreasing study participants’ time constraints. This likely enhanced our high satisfaction score for scheduling. High patient satisfaction was observed with task execution and perceived exertion during smCPET. The tested device uses a stationary stairstep for graded exercise, which was frequently familiar to participants. The short duration of graded exercise (3 minutes) was not perceived by any participant as maximum exertion by the Borg survey, likely contributing to the high level of patient satisfaction. Second, the Borg score of <7 after smCPET suggests a reasonable probability of success when transitioning its use to patients with more severe comorbidities, or preoperative deconditioning. It is important to note that the ventilatory threshold, or anaerobic threshold, was not measurable in 50% of our cohort, suggesting that the brief graded exercise contributed to the reported exertion level and high participant satisfaction.
One of the goals of smCPET is to make precise cardiopulmonary evaluation more widely available and patient centered, advantages that are acknowledged by its increasing adoption in the routine assessment of heart failure and pulmonary hypertension. Consistent with large-scale CPET application in cardiovascular clinical trials, smCPET did not result in findings of major or minor complications despite encouraging participants to safely provide their best effort within the timed and graded exercise component . This is reassuring, as early termination of preoperative CPET trials, due to participant fatigue, safety, or other considerations, has been reported to be approximately 11% . However, we purposefully selected functionally independent participants with self-reported ≥4.6 METs, and expansion to patients who are less functionally independent may result in higher smCPET session failure rates. Regardless, the safety of smCPET has been suggested by its routine application to high-risk and frail populations with severe cardiopulmonary disease, suggesting that a wide spectrum of preoperative populations can be safely tested using smCPET . The structured survey estimated METs were, on average, significantly higher than their smCPET equivalents. Using the subjective METs structured survey, 8 (19%) of 43 participants reported peak METs within 10% of smCPET extrapolated peak METs, 12 (28%) were underestimated by >10%, and 23 (53%) were overestimated by >10%, when compared to smCPET values. Brief smCPET identified that 8 (19%) out of 43 study participants had ≤4.6 extrapolated peak METs (peak VO 2 equivalent: 14 mL kg –1 min –1 ), corresponding to a METs threshold associated with higher perioperative cardiovascular risk . Furthermore, smCPET identified 9 (21%) out of 43 participants with an age-adjusted peak VO 2 of less than 20 mL kg –1 min –1 , corresponding to poor aerobic capacity, and 2 (5%) with an extrapolated peak VO 2 less than 15 mL kg –1 min –1 , a measure frequently associated with higher perioperative risk . These findings support prior descriptions of provider-driven and structured survey overestimation bias, highlighting the challenge of obtaining an accurate preoperative functional capacity assessment. Clinicians, when compared to CPET, had a 19.2% sensitivity in identifying low functional capacity (≤4 METs) . Other investigations have also observed that preanesthesia evaluation calculation of self-reported METs overestimate functional capacity when compared to CPET assessment . DASI was also found to poorly predict participants with lower peak VO 2 . In a cohort of participants that would not necessarily receive extensive preoperative assessment, given that 100% reported the ability to reliably climb 2 flights of stairs, this may suggest opportunities to identify and preemptively optimize unexpected cardiopulmonary impairments prior to surgical intervention. Worldwide, value-based health care has been a significant priority, and conventional preoperative evaluation may increase overall testing costs without improving perioperative outcomes . Implementing brief smCPET for individualized preoperative cardiovascular evaluation may improve the precision of preoperative cardiovascular risk assessment and may potentially curb excess preoperative cardiovascular testing commonly associated with older age and patients with higher comorbidities . 
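As a worked illustration of the agreement bands and risk thresholds discussed above (survey estimates within, over, or under ±10% of the smCPET value; extrapolated peak METs ≤4.6; and peak VO2 below 20 or 15 mL kg-1 min-1), a short sketch follows. The age adjustment applied to the 20 mL kg-1 min-1 cutoff in the text is omitted here for simplicity, so this is illustrative only.

def agreement_band(survey_mets: float, smcpet_mets: float, tol: float = 0.10) -> str:
    """Classify a survey estimate as within, over, or under +-10% of the smCPET value."""
    if abs(survey_mets - smcpet_mets) <= tol * smcpet_mets:
        return "within 10%"
    return "overestimated >10%" if survey_mets > smcpet_mets else "underestimated >10%"

def risk_flags(extrapolated_peak_mets: float, extrapolated_peak_vo2: float) -> list:
    """Flag the thresholds the discussion links to higher perioperative risk."""
    flags = []
    if extrapolated_peak_mets <= 4.6:
        flags.append("peak METs <= 4.6")
    if extrapolated_peak_vo2 < 20:
        flags.append("peak VO2 < 20 mL/kg/min (poor aerobic capacity)")
    if extrapolated_peak_vo2 < 15:
        flags.append("peak VO2 < 15 mL/kg/min")
    return flags

print(agreement_band(survey_mets=7.6, smcpet_mets=6.7))
print(risk_flags(extrapolated_peak_mets=4.0, extrapolated_peak_vo2=14.0))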
However, widespread adoption of this technology in the perioperative space will require (1) further evidence of smCPET predictive validity for perioperative outcomes, (2) characterization of optimal system processes for patient selection, and (3) justification of cost-benefit. Study Limitations Several limitations restrict the generalizability of our findings to other populations. Selection bias should be acknowledged given that participants who volunteered for the study are likely to be more health-conscious than usual patients who undergo PSE. A measurement bias may be introduced into the study given that researchers may unconsciously influence participant performance on smCPET or interpret results differently based on unconscious expectations. Similarly, a recall bias is often introduced when using structured, interview-style questionnaires such as those used in our study. Instrument bias may similarly impact smCPET findings; however, this is substantially reduced by routine device calibration. Confounding factors may also play a role, as participants with higher fitness levels would find it easier to adapt to the stairstep exercise challenge. Our inclusion criteria purposely selected participants with lower comorbidities to ensure successful participation rates for this feasibility study. We acknowledge that certain premorbid conditions and chronic medication usage may influence smCPET participants’ performance, but we did not balance this factor in this exploratory study. Although CPET and smCPET predictive performance with cardiovascular perioperative morbidity and mortality has been previously published, our cohort is not yet powered for the assessment of perioperative outcomes with this device . Finally, the finding of no device-related adverse events should be cautiously interpreted given the small sample size and the possibility of rare exercise-induced adverse events. Conclusions In summary, we observed that smCPET implementation was well accepted into the workflow of a high-volume PSE clinic. Operator efficiency with the smCPET instrument was rapid and achieved relative parity at 15 participant sessions. smCPET, when compared to usual session times for conventional CPET of 15-20 minutes, uses less than half the time (6 minutes), making it attractive for the purposes of precise but time-efficient preoperative evaluation of exercise tolerance. This feasibility analysis has (1) reinforced the operational integrity of our active study protocol assessing smCPET findings with perioperative outcomes and (2) affirmed satisfactory patient-centered outcomes with study procedures. Studies should further expand smCPET predictive validity to postoperative cardiopulmonary complications, assess cost-effectiveness, and develop optimal system processes for patient selection.
Deciphering clinical significance of BCL11A isoforms and protein expression roles in triple-negative breast cancer subtype
3951b9b9-9867-4a5b-bd49-498efb25b0de
10314865
Anatomy[mh]
Triple negative breast cancer, which accounts for 10–20% of all invasive breast cancer (BC) subtypes, is characterized by the lack of immunohistochemical expression of estrogen receptor (ER), progesterone receptor (PR), and HER2 and/or HER2 gene amplification. TNBC is most prevalent in women aged < 50 years and shows aggressive clinical behavior (i.e., high histological grade, significantly high metastatic rate and it is responsible for about 25% of BC related deaths) (Angius et al. ). Its heterogeneity can be associated with different clinical outcomes. A recent study evaluated the outcome of TNBC patients highlighting that an accurate and reliable histopathologic definition of TNBC subtypes has a significant clinical utility and is an effective tool during the therapeutic decision making process (Sanges et al. ). Using gene expression profiling, the molecular signature of TNBC divided the molecular subclassification into four groups: basal-like 1 and 2, mesenchymal, and luminal androgen receptor (LAR) (Lehmann et al. ). Gene expression profiling, morphological and immunohistochemical analysis of TNBC represent prognostic and therapeutic tools to customize therapy and improve patient outcomes. TNBC molecular biomarkers could predict the prognosis (Cagney et al. ). We demonstrated that modification of miR-135b might improve the outcome of TNBCs with basal-like features (Uva et al. ). The subclassification of patients in our TNBC cohort, based on the high proportion of genetic alterations involving PI3K/AKT pathways, provides evidence that specific genomic abnormalities can select patients who can benefit from targeted therapies (Cossu-Rocca et al. ). BCL11A was initially detected due to an aberrant chromosomal translocation t(2;14)(p13;q32.3) in human B-cell non-Hodgkin’s lymphomas (Nakamura et al. ). BCL11A gene is located on human chromosome 2p13 and is ~ 102 kb in length. BCL11A codes for a protein with an uncommon C2HC zinc finger at the N-terminus and six Krüppel-like C2H2 zinc fingers near the C terminus. Three main mRNA variants were found: BCL11A-XL, BCL11A-L and BCL11A-S, each contains differing numbers of C-terminal C2H2 finger motifs. All 3 isoforms contained the first 3 exons, and only the longest isoform expresses sequences from exons one to four (Satterwhite et al. ). BCL11A-XL protein isoform was expressed in brain and hematopoietic tissues (Liu et al. ). Also BCL11A-XL expressed in a range of tumor-derived cell lines (Pulford et al. ). Functional studies demonstrated that BCL11A-XL was a transcriptional repressor working in association with itself, other BCL11A isoforms, and with BCL6 gene. So BCL11A-XL might play an essential role in tumor development (Liu et al. ; Pulford et al. ). High level expression of BCL11A-S was observed in human Hodgkin’s lymphoma cell line [8]. BCL11A-L isoform was expressed preferentially in derived B -cell malignant cell lines (Satterwhite et al. ). Growing evidence demonstrated that BCL11A also plays an essential role in the pathogenesis of solid tumors, including prostate cancer, lung cancer, laryngeal squamous cell carcinoma and acute leukemia (Kapatai and Murray ; Chetaille et al. ; Boelens et al. ; Agueli et al. ; Jin et al. ; Podgornik et al. ). Khaled et al. determined that BCL11A acts as an oncogene in TNBC, and its overexpression is key for tumor formation and invasion. BCL11A supports the development of normal and malignant mammary epithelial stem/progenitor populations (Khaled et al. ). 
Furthermore, its silencing reduces the tumor-initiating cell population in a TNBC xenograft model (Zhu et al. ). In the mouse mammary gland, BCL11A is part of a specific subset of embryonic mammary genes, silenced in adult epithelia and reactivated in mouse and human basal-like breast cancer (Zvelebil et al. ). The aim of the present study was to assess the clinical role of BCL11A in the molecular TNBC subtype. A retrospective cohort of BC patients diagnosed between 2000 and 2015 was selected. Samples were obtained from the archives of the Department of Histopathology of the Oncology Hospital of Cagliari, Italy. Inclusion criteria were complete review of surgical specimens and medical records and availability of formalin-fixed, paraffin-embedded (FFPE) tumor blocks from surgical specimens. Three experienced pathologists independently reviewed all cases. Histologic subtyping was performed according to current WHO classification (Rakha et al. ). Three-µm-thick tissue sections of FFPE specimens were cut for hematoxylin and eosin staining, IHC, in situ hybridization (SISH) and genetic analysis. The study protocol was approved by the Azienda Sanitaria Locale Sassari Bioethics Committee (n. 1140/L, 05/21/2013) and followed the Italian law on guidelines for the implementation of retrospective observational studies (G.U. n. 76, 31 March 2008). Only coded data were collected to protect patient confidentiality. Immunohistochemistry ER, PR, HER2 and Ki-67 immunohistochemical expression and/or HER2 gene amplification, as defined by silver enhanced SISH, established the surrogate intrinsic subtypes of BC, based on the St. Gallen Consensus 2013 (Goldhirsch et al. ). AR Clone SP107 (Cell-MarqueTM, Rocklin, CA, USA) was used to determine AR expression. IHC and SISH analysis were performed as previously described (Orrù et al. ). BCL11A clone 14B5 (dilution 1:100, ab19487, AbCam, Cambridge, USA) was used to determine BCL11A expression. The ab19487 antibody, whose epitope lies within amino acids 172–434, can identify the BCL11A-XL and BCL11A-L isoforms. BCL11A immunostaining was performed using the Ventana Benchmark XT staining system with an Optiview DAB detection kit. IHC analysis was performed on 87 BC and 12 normal breast tissue (NBT) FFPE block samples. Also, 343 TNBC tissue microarrays (TMAs) were used. Evaluation of immunohistochemical staining ER and PR expression were positive if at least 1% immunostained tumor nuclei were detected in the sample, according to the American Society of Clinical Oncology/College of American Pathologists (ASCO/CAP) recommendations for immunohistochemical testing of hormone receptors in BC (Hammond et al. ), whose criteria have recently been adopted by WHO classification (Rakha et al. ). The Ki67 cut-offs < 14%, 15–35% and > 35% were based on results previously obtained (Urru et al. ); AR expression was considered positive if at least 10% immunostained tumor nuclei were detected in the sample (Park et al. ). All IHC expressions were categorized using a semi-quantitative method.
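The cut-offs just described can be expressed as a small set of rules; the Python sketch below encodes them for illustration only (the surrogate intrinsic subtype definitions that follow in the text build on these calls), and is not the semi-quantitative scoring workflow the pathologists actually used.

def er_pr_positive(pct_stained_nuclei: float) -> bool:
    """ASCO/CAP rule used in the text: positive if at least 1% of tumor nuclei stain."""
    return pct_stained_nuclei >= 1.0

def ar_positive(pct_stained_nuclei: float) -> bool:
    """AR considered positive at >= 10% stained tumor nuclei."""
    return pct_stained_nuclei >= 10.0

def ki67_category(pct: float) -> str:
    """Ki-67 bins used in the study: <14%, 15-35%, >35%."""
    if pct < 14:
        return "low (<14%)"
    return "intermediate (15-35%)" if pct <= 35 else "high (>35%)"

# Example: a tumor with ER 0%, AR 5%, and Ki-67 staining in 60% of nuclei.
print(er_pr_positive(0.0), ar_positive(5.0), ki67_category(60.0))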
Based on IHC approach the following BC surrogate intrinsic subtypes were found: nine Luminal A [ER and PR expression positive, with PR cut point of ≥ 20%, HER2 negative and Ki-67 low (< 14%)]; nine Luminal B [ER expression positive, PR expression negative or low, HER2 expression negative and Ki-67 high (> 14%), or ER expression positive, HER2 protein positive or HER2 gene amplified, any PR and any Ki-67]; eight HER2-enriched [ER and PR expression negative, HER2 protein positive or HER2 gene amplified]; sixty-one TNBC [ER, PR and HER2 expression negative or HER2 gene not amplified]. The ordinal Allred scoring system was used to assess BCL11A immunostaining quantity in tumor cells, based on intensity (0, negative; 1 + , weak; 2 + , moderate; 3 + , strong) and percentage of stained cells (0 = 0%, 1 =  < 1%, 2 = 1–10%, 3 = 11–33%, 4 = 34–66% and 5 =  > 66%); the combination of intensity + percentage gives an Allred score between 0 and 8. Tumor with Allred score > 2 was defined as positive for BCL11A expression (Khaled et al. ). Acid nucleic extraction Genomic DNA was obtained from neoplastic tissue, and total RNA was obtained from neoplastic and non-neoplastic specimens. Nucleic acids were extracted using the QIAmp DNA Mini Kit and miRNeasy Mini Kit (Qiagen, Hilden, Germany). The quantity and the quality of nucleic acids were assessed using Nanodrop ND1000 (Euro-Clone, Milan, Italy). The RNA quantity was evaluated by Qubit ® RNA BR Assay Kit (ThermoFisher Scientific, Waltham, USA). The RNA integrity was assessed by the RNA Integrity Number (RIN) using the Agilent RNA 6000 Nano Kit on the BioAnalyzer 2100 (Agilent, Santa Clara, USA). Quantitative real time PCR Gene expression profiles of BCL11A were analyzed in all BC molecular intrinsic subtypes. Two µg of total RNA were reverse transcribed to cDNA using the High-Capacity cDNA Reverse Transcription Kit (Applied Biosystem, Foster City, CA, USA). BCL11A encodes three mRNA variants and each isoform of BCL11A has specific expression patterns. Primers for BCL11A (Hs01076078_m1, 60 bp), the isoforms BCL11A-S (Hs01093198_m1), BCL11A-L (Hs01093199-m1), BCL11A-XL (Hs00250581_s1) and 18S rRNA (Hs99999901_S1, 187 bp) human genes were chosen using Assays-on-Demand™-Products (Applied Biosystems). Neoplastic and non-neoplastic tissues were analyzed by quantitative real time PCR (qRT-PCR) using the ABI 7900HT Sequence Detection System (Applied Biosystems) (Cossu-Rocca et al. ). The relative mRNA expression level was analyzed according to the Applied Biosystem User Bulletin N°2. The calculation 2-ΔΔCt (Fold Change, FC) was chosen to represent the level of expression, with a FC > 2 being considered as overexpression. Mutation analysis BCL11A gene mutation analysis was performed on exon 4 encoding five of the six Kruppel-like zinc-finger domains (C2H2) of the BCL11A-XL protein, where several most common missense mutations were identified in patients affected by autism, intelligence disabilities (Cai et al. ), and ovarian cancer (Er et al. ): the exon 4 contains almost all the BCL11A single nucleotide polymorphisms. Amplification of the exon 4 and Sanger sequencing analysis were performed in all BC molecular subtypes analyzed for gene expression profile, using the following sequence primers: BCL11A_ex4_F2:5ʹ-ACCGCATAGACGATGGCAC-3ʹ and BCL11A_ex4_R2:5ʹ-CCCCGAGATCCCTCCGT-3ʹ (De Miglio et al. ). Statistical analysis An ad hoc electronic form was created to collect qualitative and quantitative variables. 
Qualitative data were summarized with absolute and relative (percentages) frequencies. Chi-squared or Fisher exact tests were used to detect any statistical differences in the comparison of qualitative variables between down and up regulation of BCL11A gene or low and high protein expression. Logistic regression analysis was performed to assess the relationship between BCL11A upregulation or high protein expression and clinicopathological TNBC characteristics. Survival rate differences between down and upregulation or low and high protein expression were detected with Kaplan–Meier analysis. P-value less than 0.05 was considered statistically significant. Stata 17 (StataCorp, TX) statistical software was used for every statistical computation.
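The analyses above were run in Stata 17. For readers who want to reproduce the same steps, the sketch below shows an equivalent chi-squared/Fisher comparison and a logistic regression in Python; the package choices (scipy, statsmodels) are ours, and the contingency table and covariates are synthetic illustrations rather than the study data.

import numpy as np
from scipy.stats import chi2_contingency, fisher_exact
import statsmodels.api as sm

# Illustrative 2x2 table: BCL11A protein (high/low) vs Ki-67 (>35% / <=35%).
table = np.array([[12, 4],
                  [17, 28]])
chi2, p_chi2, dof, _ = chi2_contingency(table)
_, p_fisher = fisher_exact(table)
print(f"chi-square P = {p_chi2:.3f}; Fisher exact P = {p_fisher:.3f}")

# Illustrative logistic regression: BCL11A-high (1/0) on Ki-67 >35% and AR positivity.
rng = np.random.default_rng(2)
n = 61
ki67_high = rng.integers(0, 2, n)
ar_pos = rng.integers(0, 2, n)
logit_p = -1.0 + 1.2 * ki67_high - 0.8 * ar_pos
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
X = sm.add_constant(np.column_stack([ki67_high, ar_pos]))
fit = sm.Logit(y, X).fit(disp=False)
print(np.exp(fit.params))  # odds ratios for intercept, Ki-67 high, AR positive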
BCL11A expression in molecular intrinsic subtypes of breast cancer Eighty-seven primary BC, comprising all molecular subtypes, were analyzed by gene expression profiling by qRT-PCR. The overall high expression of BCL11A and each of its transcripts (BCL11A-XL, BCL11A-L and BCL11A-S) significantly correlated with TNBC pathology ( P < 0.05) (Fig. A).
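The overexpression calls referred to here rest on the 2-ΔΔCt fold-change calculation described in the Methods (18S rRNA as the reference gene, FC > 2 as overexpression). A minimal sketch with made-up Ct values follows; the cycle-threshold numbers are illustrative only.

def fold_change_ddct(ct_target_sample: float, ct_ref_sample: float,
                     ct_target_calibrator: float, ct_ref_calibrator: float) -> float:
    """Relative expression by the 2^-ddCt (Livak) method.

    dCt  = Ct(target) - Ct(reference gene, here 18S rRNA)
    ddCt = dCt(tumor sample) - dCt(calibrator, e.g., normal breast tissue)
    """
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_calibrator = ct_target_calibrator - ct_ref_calibrator
    return 2 ** -(d_ct_sample - d_ct_calibrator)

fc = fold_change_ddct(ct_target_sample=26.1, ct_ref_sample=12.0,
                      ct_target_calibrator=28.0, ct_ref_calibrator=12.1)
print(f"Fold change = {fc:.2f} -> {'overexpressed' if fc > 2 else 'not overexpressed'}")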
We found a significant BCL11A overexpression in TNBC compared to Luminal A (P: 0.004) and B (P: 0.002) while a significant BCL11A downregulation was present in Luminal A and B compared to NBT (P: 0.002 and P < 0.001, respectively). No significant differences were shown between HER2-enriched and other molecular intrinsic subtypes and NBT. BCL11A-XL was overexpressed in TNBC vs Luminal A and B (P: 0.012 and P: 0.040, respectively), whereas BCL11A-L and BCL11A-S were overexpressed in TNBC vs Luminal B (P: 0.003 and P: 0.011, respectively) (Fig. B). Focusing on BCL11A protein expression profile we performed IHC on the 87 primary BC molecular subtypes used for gene expression profile. A BCL11A protein overexpression was found in 16 out of 61 (26.2%) TNBCs, in 4 out of 12 (33.3%) NBT, whereas no protein expression was detected in Luminal A (nine cases), Luminal B (nine cases) and HER2-enriched (eight cases) tumors. The tumors immunostained positively showed high mRNA levels compared with those with negative immunostaining. Immunohistochemistry analysis of BCL11A in an independent validation cohort of 343 TNBC samples, confirmed that BCL11A protein expression agreed with the first cohort examined: 79 BCL11A-overexpressing TNBCs out of 343 (23.0%). Figure showed a representative BCL11A protein expression of BC molecular intrinsic subtypes. BCL11A expression profile and association with TNBC clinic-pathological data Table showed the clinic-pathological features of the 61 TNBC patients included in the expression profile analysis. The median (interval quartile range, IQR) age at diagnosis was 57 (31–84) years, with 39 (63.9%) older than 50 years. Forty (65.6%) tumors were ductal, 9 (14.8%) medullary, 4 (6.6%) metaplastic. Tumor staging was pT1 in 24 (42.9%) cases, pT2 in 26 (46.4%), pT3 in 3 cases (5.4%), pT4 in 3 cases (5.4%). Lymph node status was divided into 31 pN0 (53.5%), 16 pN1 (27.6%), 6 pN2 (10.3%) and 5 pN3 (8.6%). Moreover, 24.1% of tumors were stage I, 53.7% stage II, and 22.2% stage III; 4.9% of TNBCs were G1, 13.1% G2, and 82.0% G3. Ki-67 expression was > 20% in 80.3% of TNBCs. Necrosis was present in 35.1%. Tumor infiltrating lymphocyte (TIL) and lymphovascular invasion (LVI) were detected in 52.9 and 25.5%, respectively. AR expression was found in 30.9% cases. A total of 8 patients out of 61 (13.1%) died. The clinicopathological data of the validation cohort is reported in Table S1. TNBCs with BCL11A and BCL11A-L mRNA overexpression were more frequently associated with AR expression < 10% ( P : 0.05). BCL11A-L mRNA overexpression was associated with some histological types such as medullary and metaplastic carcinomas (P: 0.04) (Table ). BCL11A protein expression was associated with ki-67 > 35% (P: 0.004), and with absence of LIV and AR downregulation (P: 0.03 and P: 0.02, respectively) (Table ). BCL11A expression profile and association with TNBC survival Logistic regression analysis revealed that histological type (HR, 0.2; 95% CI 0.1–0.8; P: 0.02) and AR expression (HR, 0.2; 95% CI 0.0–1.0; P: 0.05) are independent prognostic factors for overall survival (OS) in BCL11A-L transcripts overexpressing TNBCs. High protein expression levels of BCL11A (HR, 17.1; 95% CI 4.0–72.2; P < 0.001) are independent prognostic factors for TNBCs overexpressing mRNA BCL11A or its isoforms (Tables ). LIV (HR, 0.52; 95% CI 0.29–0.92; P: 0.03) and AR (HR, 0.37; 95% CI 0.16–0.88; P: 0.02) are independent prognostic factors for TNBCs showing high BCL11A protein expression levels (Table ). 
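A sketch of the Kaplan–Meier overall-survival comparison reported next is shown below, using the lifelines package on synthetic follow-up data; the package choice and the data are ours (the authors performed the analysis in Stata), and the log-rank test added here for the group comparison is not explicitly named in the text.

import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(3)
n = 61
high_expr = rng.integers(0, 2, n).astype(bool)      # synthetic BCL11A-high vs -low labels
time_months = rng.exponential(90, n).clip(1, 120)   # synthetic follow-up times (months)
event = rng.binomial(1, 0.13, n)                    # ~13% deaths, as in the study cohort

kmf = KaplanMeierFitter()
for label, mask in [("BCL11A high", high_expr), ("BCL11A low", ~high_expr)]:
    kmf.fit(time_months[mask], event_observed=event[mask], label=label)
    print(label, "median OS:", kmf.median_survival_time_)

res = logrank_test(time_months[high_expr], time_months[~high_expr],
                   event_observed_A=event[high_expr], event_observed_B=event[~high_expr])
print("log-rank P =", round(res.p_value, 3))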
Kaplan–Meier curve for OS showed no differences among TNBCs with overexpression of BCL11A transcripts and its isoforms in comparison with those downregulated. We observed the same trend for TNBCs with high protein expression levels, analyzing the entire cohort of tumors included in the study (Fig. ). BCL11A mutational analysis in molecular intrinsic subtypes of breast cancer Sequencing of BCL11A exon 4 did not find any genomic variation in our BC molecular cohort, except the rs7569946. This synonymous C-to-T substitution (Phe699Phe) was detected in all BC molecular subtypes. CC genotype was prevalent in all BC molecular subtypes (60–62.5%). In TNBC subtype, no TT homozygotes were present while 40% of them showed CT genotype.
BCL11A is a proto-oncogene which maps on chromosome 2p16. Alternative splicing generates at least three most common BCL11A transcripts, BCL11A-XL, BCL11A-L and BCL11A-S containing differing numbers of C-terminal C2H2 finger motifs, and showing low expression in normal human tissue, except in fetal liver, hematopoietic tissue and brain (Yin et al. ). The BCL11A-XL mRNA is the prevalent transcript (Satterwhite et al. ). BCL11A acts as a transcription repressor directly binding to its DNA target sequence, 5ʹ-GGCCGG-3ʹ (Avram et al. ) and/or indirectly interacting with and repressing other sequence specific transcription factors, such as COUP-TFs (Avram et al. ). BCL11A is an oncogene of different malignant hematological diseases (Weniger et al. ; Nakamura et al. ). Recently, the pathogenetic role of BCL11A was also highlighted in solid tumors (e.g., lung, prostate, breast cancer, endometrial carcinoma, laryngeal squamous carcinoma) (Zhang et al. , ; Jiang et al. ; Khaled et al. ; Zhou et al. ; Chen et al. ; Wang et al. ). In our study, BCL11A was significantly overexpressed in TNBC both at transcriptional and translational levels compared to the other BC molecular subtypes. Gene expression profiling showed that high expression levels of BCL11A and its isoforms (BCL11A-XL, BCL11A-L and BCL11A-S) significantly correlated with TNBC pathology. Additionally, tumors positively immunostained showed high BCL11A mRNA levels compared with those with negative immunostaining. Our results confirmed recent data correlating BCL11A overexpression and TNBC subtype (Khaled et al. ). We found BCL11A protein expression in 26% of TNBCs in our cohort, similar to the 29.6% reported by Chen et al. (Chen et al. ), in contrast with Khaled et al. (67% of BCL11A expression in TNBC with basal-like features) (Khaled et al. ) and Wang et al. (100% of BCL11A expression in TNBC using a different score to define BCL11A overexpression) (Wang et al. ).
The lower percentage of BCL11A protein expression detected in our cohort could depend on several factors: the definition of BCL11A expression by several operators, the cut-off values used, or the analysis performed on all TNBCs despite classification into molecular sub-classes. Regarding the prognostic significance, we showed that BCL11A protein expression acts as an unfavorable prognostic factor in TNBC patients. Metaplastic and medullary histotypes, absence of LIV and AR downregulation can be considered prognostic factors in patients with BCL11A overexpressing TNBC. Moreover, BCL11A overexpressing TNBCs were associated with a higher proliferation index (> 35%). Among TNBC histotypes, the medullary type of pattern is often associated with variable immunohistochemical expression of basal markers (Rakha et al. ). Our previous findings confirmed that medullary and metaplastic carcinomas exhibit higher grades (G3) and higher proliferation index (Ki67 > 30%), while LVI was detected in only 7.4% of medullary carcinomas. Metaplastic carcinoma had poor 5 and 10 year survival in comparison with other histologic types (Sanges et al. ). We found a negative relationship between LVI and BCL11A expression, in contrast with previous results that gave no significant differences (Shen et al. ). However, Ugras et al. demonstrated that LVI and nodal metastases were less frequent in TNBC vs other BC subtypes (Ugras et al. ). Based on previous findings we could speculate that in BCL11A overexpressing TNBC the worse prognosis is not related to LVI rate. Our data showed an inverse association between BCL11A overexpression and AR expression levels in TNBCs. Considering that patients with LAR TNBC showed the best OS compared to the other TNBCs subtypes (Masuda et al. ), our results might suggest that BCL11A can be a biomarker for more aggressive non luminal TNBCs subgroups. Choi et al. findings could support previous hypothesis, showing that the inhibition of BCL11A and HDAC1/2 effectively reprogramming basal like cancer cells into luminal A cells, increasing ER expression and leading to tamoxifen sensitivity (Choi et al. ). In contrast with our results, Wang et al. identified a positive correlation between AR and BCL11A expression by analyzing all BC molecular subtypes (Wang et al. ). Our survival analysis did not show any relationship between BCL11A gene and/or protein expression and patient outcomes. Khaled et al. demonstrated that patients with copy number (CN) gains of BCL11A had a higher rate of relapse and metastasis and a lower rate of survival (Khaled et al. ). The differences could be related to the selection of TNBC with basal like phenotype included in the Khaled’s study, compared to our study in which all TNBC phenotypes, included LAR, were all considered. No nucleotide variants were found in BCL11A exon 4. The literature data demonstrates the presence of different genomic alterations for this gene in malignant diseases, as well as CV amplification, epigenetic deregulation, translocation or abnormal activation upon viral integration (Boelens et al. ; Jiang et al. ; Yin et al. ). We recognize that our study does have some limitations mainly related to its retrospective nature: key clinical follow-up data were unfortunately not found in medical records. Our study highlights the role of BCL11A and its correlation with clinicopathological features of TNBC. BCL11A expression seems to be a poor prognostic factor in TNBC patients. 
BCL11A may become a prognostic factor for the more aggressive non-luminal TNBC subgroups, with the worse prognosis of BCL11A-overexpressing TNBC not related to LVI. Furthermore, BCL11A was overexpressed in more aggressive histologic types, such as metaplastic and medullary carcinomas. These results may provide a new paradigm for TNBC classification and a better treatment strategy. Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 18 KB)
Real-time extended psychophysiological analysis of financial risk processing
81535277-4d2d-46ac-b6f9-d57c6b72eb6d
9312384
Physiology[mh]
Several experimental studies in the field of behavioral economics have documented that emotional states (for example, fear, excitement, and stress) may influence the decision making of an agent . In financial markets, sudden fluctuations in prices, changes in investor holdings, or other factors unknown to the agent may impact decisions. To better understand the biological and psychological mechanism of financial decision making under uncertainty, the emerging field of neurofinance has incorporated insights from psychology and neuroscience into theories of finance . In previous studies, researchers characterized the psychophysiological (PP) activities of traders using functional magnetic resonance imaging (fMRI), hormonal level measurement, and physiological signals like heart rate, blood pressure, and skin temperature (reviewed in Section Literature Review ). However, these techniques share the common limitation that, due to the equipment used to measure the physiological signals, the experiments are either conducted in laboratory settings rather than in the field, or the data collection procedure interferes with the day-to-day activities of traders. While controlled experimentation yields cleaner inferences, their findings lack the direct applicability to real financial settings which in situ experiments provide. In this work, we conducted an in situ experiment and collected the physiological signals of 55 professional financial traders at a global financial institution during the entire trading day over a five-day period for each trader. Using an Empatica E4 wristband, we capture a variety of real-time physiological signals, such as heart rate, blood volume pulse, inter-beat interval, electrodermal activity, skin temperature, and three-dimensional accelerations of wrist motion, without interfering with the traders’ daily routines on their trading desks. This non-invasive experimental design offers the most realistic setting to date for analyzing the traders’ real-time PP activities during financial decision making. The primary focus of our analysis was to measure the relation (if any) between a trader’s PP state and their financial decisions or market events, both contemporaneously and temporally. Since a trader’s PP state is not directly observed and must be inferred from the physiological signals, we measure each trader’s “psychophysiological (PP) activation”, defined as the collective deviation of physiological signals from their baseline values. Higher levels of PP activation indicates emotions such as excitement, stress, agitation, arousal, and irritation which elevate the physiological activities from the normal state. By analyzing the changes in the trader’s PP activation levels, we identify the circumstances in which such changes coincided with, or were caused by, financial transactions or market events. The key challenge was to accurately measure the levels of PP activation, and a major contribution of our study is the novel application of the Mahalanobis distance on features extracted from physiological signals as a quantitative metric of PP activation. There are two underlying assumptions of our analysis. First, financial decision making and risk processing under uncertainty trigger PP activation in professional traders. Second, the levels of the trader’s PP activation can be explained by multiple factors such as market fluctuations, the types of financial products traded, as well as the trading experience of each trader. 
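The PP activation metric is described as the collective deviation of physiological features from their baseline values via the Mahalanobis distance, but the exact features and baseline window are not specified at this point in the text. The following Python sketch is therefore illustrative only, using synthetic per-minute features (heart rate, electrodermal activity, skin temperature) and an arbitrary baseline period.

import numpy as np
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(4)

# Synthetic per-minute feature vectors: [heart rate, electrodermal activity, skin temperature].
baseline = rng.multivariate_normal(mean=[70, 0.4, 33.0],
                                   cov=np.diag([25, 0.01, 0.09]), size=240)
mu = baseline.mean(axis=0)
vi = np.linalg.inv(np.cov(baseline, rowvar=False))  # inverse covariance of the baseline window

def pp_activation(sample: np.ndarray) -> float:
    """Collective deviation of a feature vector from the trader's baseline state."""
    return mahalanobis(sample, mu, vi)

calm = np.array([72, 0.42, 33.1])
aroused = np.array([95, 0.90, 34.0])
print(f"calm: {pp_activation(calm):.2f}  aroused: {pp_activation(aroused):.2f}")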
We test four hypotheses related to these two assumptions:

1. Traders monitor multiple market indices during working hours. Fluctuations in these indices often represent trading signals and may affect the traders' existing positions and real-time profit and loss (PnL). We test the hypothesis that market fluctuations have a causal influence on the PP activation of the traders.

2. The traders who participated in our study have trading experience ranging from 10 months to 25 years. As traders gain experience, they may develop skills to manage their PP activation level when making risky financial decisions. Since we do not observe these skills directly, we hypothesize that the ability to manage one's activation level is positively correlated with the length of one's trading experience and test the statistical relationship between the trading experience and activation levels of the traders.

3. The traders who participated in our study worked in different business divisions and traded different financial products. Since different financial products differ in market volatility, dollar volume of transactions, and trading frequency, we expect the traders to show different patterns of PP activation depending on their business division. We test the hypothesis that the traders' PP activation is related to their business divisions or the financial products they trade.

4. As traders engage in risky financial transactions, their PP activation levels may change due to the financial decision making process before, during, and after the transaction. Activation levels may also depend on other factors such as the number of transactions and the average dollar volume. We examine the change in traders' PP activation before and after making a transaction and identify the period when traders are most highly activated. We test the hypothesis that the traders' PP activation levels are related to their trading frequency and the average dollar volume of their transactions.

We found statistically significant relationships between the levels of PP activation and financial market movements, and that different traders monitored different market signals which, in turn, had a causal influence on their PP activation levels. We found that traders with more trading experience had lower levels of activation. We also found a significant relationship between a trader's activation level and the type of financial products being traded. For example, traders specializing in G10 rates, equities, commodities, and foreign exchange had higher levels of PP activation, while traders in securitized markets generally had lower activation. Differences in the properties of the financial products lead to differences in traders' decision making processes, and hence in their PP activation levels. We also found that traders on average have the highest activation levels 15 to 25 minutes after making a transaction. We conducted follow-up interviews with 14 of the 55 traders to review their individual results and received useful feedback that helps us interpret their PP activation patterns in the context of their specific working environments. The interviews yielded additional factors which influence traders' PP activation, such as the demands of the workday, social events, and managerial responsibilities. We found that busier workdays and social events led to higher PP activation in some traders.
A major challenge in conducting neurofinance field studies is the faithful measurement of the trader's real-time neural activities during live trading over extended periods of time . Most previous studies in the literature encountered an important tradeoff between measurement and environment, since measuring traders' neural activities in real time required a controlled experimental setting. As a result, these studies typically used mock trading and simulated market events to replace the conditions of live trading. Many studies used fMRI to continuously monitor participants' neural activity during simulated trading exercises. Another study designed simulated lottery games to study the effect of elevated cortisol levels on the degree of risk aversion of the participants. While the controlled experimental setting provides a rigorous framework for data collection and analysis, several issues prevent one from applying the methodology and conclusions of these studies to professional traders engaging in real-world trading activities. First, it is difficult to perform neural activity measurements during live trading seamlessly, without interfering in the normal work routine of the trader. The act of making the measurements may have a lasting impact on the participant's mental state, which is difficult to calibrate and remove from the analysis. In addition, the neural activity profile of a participant (who may not be a professional trader) during mock trading is likely to differ from that of professional traders during live trading, since in the latter situation the traders incur real profit and loss (PnL) for their companies as a consequence of their decisions.

Performing physiological measurements during live trading sessions has often required non-disruptive measurement schemes at discrete points in time during or after trading hours rather than real-time monitoring. Lo et al. used daily emotional-state surveys and found that traders who reacted more intensely to their monetary gains and losses exhibited significantly worse PnL. Numerous studies used saliva samples to measure traders' intra-daily cortisol and testosterone levels and studied the relation between traders' hormone levels and their trading performance measured by PnL. Although saliva sampling may faithfully reflect a trader's PP state during live trading without causing much disruption to their normal work routine, it is difficult to perform such measurements continuously in time, and the studies cited above collected saliva samples two or three times each day. These measurement schemes cannot effectively capture a trader's real-time PP responses to transient market events and fluctuations in asset prices and market indices.

Lo and Repin were the first to measure traders' real-time physiology during live trading sessions. They also collected financial market data relevant to the traders' financial decisions synchronously with the physiology measurements. The time series data allowed them to identify PP responses to transient market events and periods of high market volatility, though their study involved relatively few subjects (10 traders) and short measurement periods (ranging from 49 to 83 minutes) during which the physiological signals were collected from each trader. In this study, we extended this line of research and collected the physiological signals of 55 professional traders at a global financial institution. Data collection for each trader lasted five consecutive workdays, six to eight hours each day.
The larger sample size and longer measurement periods allowed us to study the PP behavior of traders in a more unconstrained and natural setting. We used the Empatica E4 wristband to collect high-frequency physiological time series data without disrupting the traders' normal work routine. We also collected relevant market data and the financial transaction data of individual traders to analyze their PP responses to market fluctuations and financial transactions.

Another contribution of this study is our novel method for measuring PP activation from physiological signals to capture psychological states such as excitement, stress, and irritation. Higher activation is known to trigger psychological, behavioral, and physiological changes in the brain . These changes influence different biological signals, such as electrocardiogram readings, blood volume pulse, electrodermal activity, heart rate variability, and skin temperature . Since stress is an important cause of PP activation, the rich literature on stress detection from physiological signals may be applied to measure PP activation. Al-Shargie et al. measured electroencephalography and functional near-infrared spectroscopy on the prefrontal cortex for stress assessment. Several studies used features extracted from electrodermal activity to train stress-detection machine learning models, while others analyzed heart rate variability for stress detection. In addition, photoplethysmography and skin temperature are also used for stress detection. Fernandez et al. proposed a system which uses commercial devices to detect stress in traders and alert them in real time, which is of direct interest for our purposes. Previous experimental studies generally used artificial methods to elevate the activation levels of the subjects and measured their biochemical markers or physiological responses synchronously . This "ground truth" information about the activation patterns makes it possible to train machine learning models for PP activation analysis. However, such "ground truth" information is not available in our study, since we are agnostic about the factors that elevate the traders' activation. Instead, we propose a novel metric for PP activation using the Mahalanobis distance of features extracted from physiological signals, which to the best of our knowledge has not been previously explored. This metric allows us to rigorously test the statistical relationship between PP activation and various aspects of financial decision making and risk processing, discussed in the next section.

To rigorously analyze how financial risk processing affects a trader's PP state, we collected both the financial transaction records and the real-time physiological signals (Table A in ) of 55 professional day traders at a global financial institution during their normal trading activities. Before data collection, we obtained demographic information about the traders, such as gender, trading experience, and business division, via surveys. To understand the impact of market fluctuations on the traders' PP activation, we also acquired 10 intraday financial market time series (Table B in ) commonly monitored by the traders during the same period as we measured their physiology. The details of the data collection and preprocessing procedures are described in Sections A to F in .
Measuring PP activation

From the physiological signals of the traders, we constructed a synthetic metric to characterize each trader's PP activation, defined as the physiological response to emotions like excitement, stress, and irritation. We chose to measure PP activation since it is ubiquitous in the context of financial risk processing. Events such as high market volatility, a sudden gain or loss in the trader's portfolio, or receiving instructions from clients to place transactions may induce elevated activation in traders. Previous studies showed that a high level of activation is associated with reduced performance . In addition, prolonged periods of high activation have an adverse impact on health . We extract PP activation from the physiological features in the following way:

Heart Rate Variability (HRV): HRV measures the variations in the beat-to-beat intervals. Inspired by , we calculated four features from the raw HRV signal to measure activation: mHR, the average number of beats per minute; mRRi, the average value of the interbeat (RR) interval; SDNN, the standard deviation of the RR interval; and RMSSD, the root mean square of the successive interbeat interval differences. All features are calculated as time series using a 5-minute trailing window, similar to .

Electrodermal Activity (EDA): EDA measures the electrical conductivity of the skin. Elevated activation increases skin temperature and sweating, which in turn change the EDA. We calculated three features from the raw EDA signal to measure activation: mAmp, the average of the raw skin conductance signal; Slope, the average of the absolute first difference of the skin conductance signal; and Events, the number of times the EDA signal increases by more than 0.05 μS in less than 5 seconds. These features are inspired by .

Blood Volume Pulse (BVP): BVP is measured by a photoplethysmogram (PPG) sensor, which uses an optical technique to measure the blood flow rate and blood volume. Blood pressure is then estimated from the BVP readings . Elevated cardiovascular activity due to higher activation changes the BVP. We calculated three features from the raw BVP signal to measure activation: the average of the absolute value of the BVP, the minimum value of the BVP, and the maximum value of the BVP. These features are inspired by .

Skin Temperature (TEMP): TEMP generally increases during periods of high activation. Changes in skin temperature depend on the specific body region being measured and on changes in room temperature. We assume that changes in room temperature occur much more slowly than the sudden onset of events that lead to high activation. We compute the instantaneous rate of change in TEMP to capture changes in the level of PP activation.

Once we extracted these features as a column vector $y_t \in \mathbb{R}^d$ at each time $t$ for a trader, we computed the aggregate measure of PP activation using the Mahalanobis distance $d_M$,

$$d_M(y_t) = (y_t - \mu_t)^T \Sigma_t^{-1} (y_t - \mu_t) \quad (1)$$

where the column vector $\mu_t \in \mathbb{R}^d$ and the matrix $\Sigma_t \in \mathbb{R}^{d \times d}$ are the rolling sample mean and covariance matrix computed using $y_1, \ldots, y_t$, and the superscript $T$ denotes vector transposition. We use the rolling mean and covariance to prevent the look-ahead bias of computing PP activation at time $t$ using signals measured at a future time $t' > t$.
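To make the construction in Eq (1) concrete, the following is a minimal sketch, not the authors' code, of a rolling Mahalanobis activation computed with NumPy. It assumes the features have already been assembled into a matrix with one row per time step; the warm-up length and the small ridge term added for numerical invertibility are illustrative choices, not taken from the paper.

```python
import numpy as np

def rolling_pp_activation(features: np.ndarray, warmup: int = 300, ridge: float = 1e-6) -> np.ndarray:
    """Rolling Mahalanobis activation d_M(y_t) for a (T, d) feature matrix.

    mu_t and Sigma_t are the expanding mean and covariance of y_1..y_t, so no
    future information enters the estimate at time t (no look-ahead bias).
    """
    T, d = features.shape
    activation = np.full(T, np.nan)
    for t in range(warmup, T):
        window = features[: t + 1]                                   # y_1 .. y_t
        mu_t = window.mean(axis=0)
        sigma_t = np.cov(window, rowvar=False) + ridge * np.eye(d)   # ridge keeps Sigma_t invertible
        diff = features[t] - mu_t
        activation[t] = diff @ np.linalg.solve(sigma_t, diff)        # (y_t - mu_t)^T Sigma_t^{-1} (y_t - mu_t)
    return activation
```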
The Mahalanobis distance captures the trader's PP activation as the collective deviation of the physiological activities $y_t$ from their baseline state $\mu_t$ and adjusts for the covariance between the components of $y_t$, which tend to be highly correlated if they are extracted from the same underlying physiological signal measured by the wearable device (shown in Table G of ). Using the Mahalanobis distance, we computed five different measures of PP activation: the overall PP activation (using features extracted from all physiological signals) and the individual HRV, EDA, BVP, and TEMP activation (each using features extracted from one physiological signal) for every trader. In the subsequent analysis, we focus on the overall PP activation since it captures the collective excitation of all physiological activities simultaneously.

Next, we identify the time periods when each trader is under mild and extreme levels of PP activation by measuring deviations from the mean. We label the trader as "mildly activated" at time $t$ if her PP activation $d_M(y_t)$ satisfies $d_M(y_t) > \mu + 1.5\sigma$, where $\mu$ and $\sigma$ denote the mean and standard deviation of her PP activation during the trading day. Details of computing $\mu$ and $\sigma$ are provided in Section F in . Similarly, we label the trader as "extremely activated" at time $t$ if $d_M(y_t) > \mu + 3\sigma$. We chose the thresholds 1.5 and 3 based on a qualitative assessment of the magnitude of mild and extreme activation. Future extensions of our work may use data-driven methods to identify periods of mild and extreme activation for each trader.

Finally, we use three aggregate metrics to characterize the PP activation patterns of each trader: activation proportion, activation length, and average activation. Activation proportion measures the percentage of time during a trading day when the trader was at a mild or extreme level of activation; traders with a higher activation proportion tended to be more susceptible to elevated activation. Activation length measures the average duration of the mild or extreme activation periods of a trader; traders with a longer activation length tended to remain activated for longer after the sudden onset of events which triggered activation. Average activation measures the average value of PP activation over the trading day and reflects both activation proportion and length: traders with a higher activation proportion or length also have a higher average activation. These aggregate measures allowed us to directly compare PP activation patterns across different traders and trading days. The definition and computation of activation proportion and activation length are described in Section F in .
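As a worked illustration of the activation labels and aggregate metrics just described, here is a minimal sketch assuming a one-dimensional activation series sampled at a known rate; the plain sample mean and standard deviation stand in for the estimation details given in Section F of the supplement.

```python
import numpy as np

def activation_summary(activation: np.ndarray, fs: float = 1.0) -> dict:
    """Aggregate activation metrics for one trader-day.

    `activation` is the PP activation series d_M(y_t) sampled at `fs` Hz.
    Labels follow the mu + 1.5*sigma (mild) and mu + 3*sigma (extreme) rule.
    """
    x = activation[~np.isnan(activation)]
    mu, sigma = x.mean(), x.std()
    mild = x > mu + 1.5 * sigma
    extreme = x > mu + 3.0 * sigma

    def mean_run_length(mask: np.ndarray) -> float:
        """Average duration (in seconds) of consecutive activated runs."""
        runs, count = [], 0
        for flag in mask:
            if flag:
                count += 1
            elif count:
                runs.append(count)
                count = 0
        if count:
            runs.append(count)
        return (float(np.mean(runs)) / fs) if runs else 0.0

    return {
        "average_activation": float(x.mean()),
        "mild_proportion": float(mild.mean()),        # fraction of the day mildly activated
        "extreme_proportion": float(extreme.mean()),  # fraction of the day extremely activated
        "activation_length_sec": mean_run_length(mild),
    }
```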
Activation level attribution regression

To identify the relationship between individual factors and the PP activation of the traders, we perform the following regression:

$$y_{i,t} = \alpha + \beta \cdot bis_i + \gamma \cdot gen_i + \delta \cdot amt_{i,t} + \theta \cdot exp_i + \zeta \cdot numT_{i,t} + \eta \cdot vol_{i,t} \quad (2)$$

where $y_{i,t}$ denotes the activation metric of trader $i$ on day $t$, $bis_i$ the business division (one-hot encoded), $gen_i$ the gender (male/female), $exp_i$ the trading experience of the trader, $amt_{i,t}$ the average dollar value of the transactions by trader $i$ on day $t$, and $numT_{i,t}$ the number of transactions by the trader on day $t$. Since traders' activation may also be correlated with market volatility on different days, we also include $vol_{i,t}$, the vector of volatilities of the different market indices on day $t$. The right-hand-side explanatory variables of the regression were extracted from the trader survey, financial market indices, and the traders' transaction data. We use the average activation as the left-hand-side dependent variable $y_{i,t}$ since it reflects the other activation metrics (proportion and length).

Granger causality test

We test whether there is a Granger-causal relation between fluctuations of market indices and a trader's PP activation (see the code sketch at the end of this section). Specifically, let $y_t$ and $m_t$ denote the minute-level time series of a trader's PP activation and a market index on a trading day. Since the activation time series is measured at 1 Hz (a higher frequency than the market indices), we average $y_t$ over each minute to obtain the minute-level activation time series $\bar{y}_t$. To account for non-stationarity in the time series, we perform the Granger causality test on the first differences $\Delta \bar{y}_t = \bar{y}_t - \bar{y}_{t-1}$ and $\Delta m_t = m_t - m_{t-1}$. For each trader and each market index on a given trading day, we test the null hypothesis that $\Delta m_t$ does not Granger-cause $\Delta \bar{y}_t$. The Granger causality test is performed with the sum of squared residuals (SSR) test with a chi-squared distribution, using the Python statsmodels package and a lag of 10 minutes. To correct for multiple hypothesis testing across all traders, we apply the Holm-Bonferroni correction for each market index at a significance level of 0.05.

Activation around financial transactions

We analyzed the pattern of traders' PP activation around financial transactions. Specifically, we computed the evolution of PP activation using a rolling window before and after placing a trade in the following way. We first calculated the z-scores of PP activation, $z_t$, so that activation levels may be consistently compared across different time periods and traders. Next, for a transaction placed by the trader at time $l$, we calculated the mean values of $z_t$ over the windows $[l - 30\,\text{min},\, l - 25\,\text{min}], [l - 25\,\text{min},\, l - 20\,\text{min}], \ldots, [l + 20\,\text{min},\, l + 25\,\text{min}], [l + 25\,\text{min},\, l + 30\,\text{min}]$. This yields the time series of activation averaged over 5-minute windows in the 30-minute timeframe before and after the transaction. Finally, for each trader on a trading day, we averaged the rolling-window results across all transactions made by that trader on that day. We computed the evolution of PP activation around a transaction by averaging the results across all traders and trading days.

Individual trader meetings

The 55 traders who participated in our study had diverse levels of trading experience and traded a variety of financial products. Since the physiological signals were recorded during their normal working hours, their PP activation levels may have been affected by idiosyncratic sources that were exogenous to their trading activities and unknown to us. To investigate these idiosyncratic sources of PP activation and relate them to our findings, we invited all participating traders to meet individually with us after we completed the PP activation analysis. Due to the traders' availability, 14 of the 55 traders participated in the individual meetings. During each meeting, we presented the individual findings to the trader and received their feedback. We applied the insights from these trader meetings to identify additional sources of PP activation.

Ethics statement

The study was approved by the following IRB/ethics committee: MIT Committee on the Use of Humans as Experimental Subjects (COUHES). The study approval number is 0403000144, and written consent was obtained prior to the study.
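The following is a minimal sketch of the per-trader Granger test described above, assuming pandas Series inputs indexed by timestamp; the resampling call and variable names are illustrative rather than the authors' implementation.

```python
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

def granger_pvalue(activation_1hz: pd.Series, market_index: pd.Series, lag: int = 10) -> float:
    """p-value of the SSR chi-squared test that the market index does NOT
    Granger-cause the trader's PP activation on one trading day.

    `activation_1hz` is the 1 Hz activation series and `market_index` the
    minute-level index series, both indexed by timestamp.
    """
    act_min = activation_1hz.resample("1min").mean()              # average to minute frequency
    df = pd.concat({"act": act_min, "mkt": market_index}, axis=1)
    df = df.dropna().diff().dropna()                              # first differences of both series
    # grangercausalitytests checks whether the 2nd column Granger-causes the 1st.
    res = grangercausalitytests(df[["act", "mkt"]].values, maxlag=lag, verbose=False)
    return res[lag][0]["ssr_chi2test"][1]
    # The p-values collected across traders are then corrected per market index
    # with the Holm-Bonferroni procedure at the 0.05 significance level.
```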
We present the analysis results of traders' PP activation and investigate the different sources that may cause elevations in PP activation levels. We discuss both general patterns of traders' PP activation observed across all traders and idiosyncratic characteristics specific to individual traders.

Variation in traders' activation levels

We analyzed the variation in PP activation patterns across different traders as reflected by four activation metrics: average activation, mild activation proportion, extreme activation proportion, and activation length. The histograms of these metrics for all traders are shown in . A trader's PP activation metric on each trading day is counted separately in the histogram to account for inter-day variation, and there are a total of 242 trader-day pairs in each histogram. The summary statistics of each histogram are shown in . We observed significant variation in all PP activation metrics across the traders. Several traders were at mild activation for 17% of working hours on certain days, while others were at mild activation for less than 8% of the time. Similarly, some traders returned to their normal activation levels in less than 30 seconds from the onset of elevated activation, while others took as long as 17 minutes. The detailed statistics of the PP activation metrics for each trader are summarized in Table E in . The significant variation in PP activation metrics can partly be ascribed to the diverse characteristics of the traders, which motivates us to investigate the systematic and idiosyncratic factors that lead to the variation in PP activation levels.

Factors influencing PP activation

We performed the regression in Eq (2) to identify the relationship between different factors and the PP activation of traders. The dependent variable of the regression, $y_{i,t}$, is the average PP activation of trader $i$ on day $t$. The correlations between the covariates of the regression are summarized in Table F in .

Gender

We do not find a statistically significant relationship between the average PP activation and the gender of a trader (difference between the regression coefficients of male vs. female: −0.02, p-value: 0.83).

Trading experience

We find that traders with more trading experience have lower average PP activation (coefficient: −0.02, p-value: 0.01). A possible explanation is twofold: traders may become more adept at making risky financial decisions under uncertainty with little emotional fluctuation, or they may acquire skills over time to actively manage their PP activation levels during financial risk processing.

Dollar volume, number of transactions, and market volatility

The average dollar volume and the number of transactions have positive regression coefficients of 0.02 (p-value: 0.26) and $0.4 \times 10^{-4}$ (p-value: 0.47), respectively. While we observed an overall positive correlation between the dollar volume, the number of transactions, and the average PP activation of the traders, additional factors of their trading activities might further influence whether a transaction causes high or low PP activation, which may explain the large p-values. During the individual trader interviews, we found that traders' PP activation depends strongly on the type of transaction (via a client's order or the trader's own decision) and on the changes in risk exposure resulting from the transaction. Similarly, the regression coefficients of the volatilities of the different market indices are not statistically significant.
This motivates us to analyze the time-series characteristics of market fluctuations and traders' PP activation via the Granger causality test discussed later.

Other factors

We also found statistically significant relations between the traders' average PP activation and the type of financial products traded. A detailed discussion of these results is provided in Section H in . During the individual trader meetings, traders reported additional idiosyncratic factors that influenced their PP activation, including whether they had a busy schedule, their managerial responsibilities, and even the anticipation of social events after work. These idiosyncratic factors were not measured in our experiment and may contribute to the variation in traders' PP activation unexplained by the regression.

Market fluctuations have causal impacts on traders' PP activation

Typically, each trader manages a portfolio of many active positions and monitors a variety of market indices and events during trading hours. It is reasonable to expect that increased volatility in market indices relevant to their portfolio values may cause the traders' PP activation levels to become elevated. We tested the null hypothesis that market fluctuations do not have a causal influence on traders' PP activation via the Granger causality test, with the Holm-Bonferroni correction at the 5% significance level. To test whether the observed Granger-causality relations between market indices and PP activation are statistically significant, we plot the distribution of p-values for each pair of PP activation and market index in Figure D in . Under the null hypothesis $H_0$ that the market index does not Granger-cause traders' PP activation, the distribution of p-values of the Granger tests should follow a uniform distribution on [0, 1]. We perform a one-sample Kolmogorov–Smirnov test and reject the null hypothesis $H_0$ for six of the eight market indices (Credit Default Swap Index Investment Grade (IG), Credit Default Swap Index High Yield (HY), S&P 500 E-mini Futures Price, 10Y US Treasury Futures Price, 5Y US Treasury Futures Price, and Crude Oil Futures Price) at significance level 0.05. The number of statistically significant Granger-causality relations between market fluctuations and traders' PP activation is shown in . We observe that the two credit default swap (CDS) indices were the most common sources of elevated PP activation for the traders. Since CDS contracts act as insurance against credit defaults, it is reasonable that fluctuations in CDS indices have a systematic impact on multiple financial products and elevate the PP activation levels among the largest number of traders.
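A minimal sketch of this aggregation step is shown below, assuming the per-trader, per-day Granger p-values for one market index have already been collected as in the Methods section; the function name and return structure are illustrative.

```python
import numpy as np
from scipy.stats import kstest
from statsmodels.stats.multitest import multipletests

def summarize_index_pvalues(pvals, alpha: float = 0.05) -> dict:
    """For one market index: count trader-days with a significant Granger
    relation after Holm-Bonferroni correction, and test whether the p-value
    distribution departs from uniformity on [0, 1] (one-sample KS test).
    """
    pvals = np.asarray(pvals, dtype=float)
    reject, _, _, _ = multipletests(pvals, alpha=alpha, method="holm")
    ks_stat, ks_p = kstest(pvals, "uniform")   # H0: p-values ~ U(0, 1)
    return {
        "n_significant": int(reject.sum()),
        "ks_statistic": float(ks_stat),
        "ks_pvalue": float(ks_p),
    }
```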
Financial transactions elevate traders' PP activation

Traders regularly engage in financial transactions which introduce considerable risks to their PnL performance. Real-time financial risk processing during a transaction can be a major source of high PP activation for the traders. Since transactions are placed at irregular points in time, we performed an event study to investigate the changes in traders' PP activation levels before and after a transaction, as described in the Methods and Materials section. The evolution of the average PP activation levels around a transaction, averaged across all traders, is shown in . The vertical axis represents the average PP activation, and the horizontal axis indicates five-minute intervals around the transaction, where the transaction time is defined as t = 0. The left half of the figure corresponds to PP activation levels before the transaction, and the right half to those after. To account for the variation across traders, we also plot the 95% confidence band of PP activation. We observe that traders tended to have the highest PP activation in the time window of 15 to 25 minutes after the transaction. We also observe a local peak of high PP activation 5 minutes prior to the transaction. During the individual trader interviews, traders pointed out that their PP activation levels depended on the risk exposure of the transaction, as well as on whether the transaction was placed via a client's order or at the trader's own discretion. In addition, traders mentioned that it usually takes 15 to 30 minutes after a transaction to confirm whether the transaction was executed as intended and to observe its effect on the trader's PnL. This anticipation effect partly explains the elevated PP activation levels 15 to 25 minutes after the transaction, shown in .
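For concreteness, here is a minimal sketch of the event-study averaging described in the Methods and Materials section, assuming a timestamp-indexed series of z-scored activation for one trader-day and that trader's transaction times; averaging the resulting profiles across traders and days would produce the kind of curve discussed above.

```python
import pandas as pd

def transaction_event_study(z_activation: pd.Series, trade_times,
                            window_min: int = 30, bin_min: int = 5) -> pd.Series:
    """Average z-scored PP activation in 5-minute bins around transactions.

    Returns the mean activation per bin for one trader-day, with bin labels in
    minutes relative to the transaction time (negative = before the trade).
    """
    offsets = range(-window_min, window_min, bin_min)     # -30, -25, ..., +25
    rows = []
    for trade in pd.to_datetime(list(trade_times)):
        row = {}
        for off in offsets:
            start = trade + pd.Timedelta(minutes=off)
            end = start + pd.Timedelta(minutes=bin_min)
            row[off] = z_activation.loc[start:end].mean() # mean z-score in this 5-minute bin
        rows.append(row)
    per_trade = pd.DataFrame(rows)                        # one row per transaction
    return per_trade.mean()                               # average across transactions
```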
Our analysis confirms that affect, as evidenced by the fluctuations of PP activation due to market fluctuations or financial transactions, plays a prominent role in financial risk processing and decision making for professional traders in their daily trading activities . Our results provide contextual support for the growing affective-science literature investigating the relations between emotions and decision making, in the case of professional traders whose profession requires them to make rational decisions under large uncertainties and maximize their payoffs. Our study also illustrates the feasibility of conducting large field experiments in the affective sciences with non-disruptive physiological measurement, and the rich insights one can extract from such data using appropriate statistical techniques. For example, by utilizing the time-series properties of PP activation, we overcome the challenge of performing causal inference with observational data and demonstrate the causal relationship between certain market indices and traders' PP activation. The novel use of the Mahalanobis distance to capture aggregate PP activation from individual physiological signals may be widely applied in many other contexts in this field.

Our analysis has several limitations to be addressed in future work. First, while we observed the presence of affect in financial risk processing, our analysis does not distinguish between positive and negative affect or investigate whether affect constitutes a beneficial or adverse driver of the traders' financial performance. Previous studies revealed that positive affect leads to improved decision making, while negative affect such as fear and anger distorts the perception of risk and causes myopic decisions . During the individual trader meetings, 12 of the 14 traders pointed out that they typically experienced high levels of activation when their real-time trading performance (measured by PnL) exhibited losses or fluctuations. Future studies may analyze the relation between traders' PnL performance and their PP activation under various market conditions using appropriate statistical or machine learning techniques . Using such findings, one may even quantify the impact of affect on the trader's PnL and prescribe strategies for traders to actively manage their affect and improve their trading performance.

In addition, while the field-experiment design allows us to faithfully measure the physiological and affective state of professional traders in their natural working environment, it also prevents us from observing and controlling all the factors which influence a trader's affect. As a result, our analysis does not make the important distinction between integral emotion (the emotion induced by financial risk processing, which is our main interest) and incidental emotion (the emotion carried over from non-trading situations such as performing managerial tasks or participating in work meetings) . This limits the interpretation of our findings, since previous studies showed that incidental emotions affect risk perception and bias financial decisions .
Future studies may improve the experimental design by asking participants to wear the measurement device only during trading activities or to report the time periods when they are mainly occupied with non-trading activities. While this improves the statistical inference, researchers must also ensure that participants adhere to the experimental protocols. Finally, from an evolutionary biology perspective, professional traders must adapt to their highly competitive, dynamic, and uncertain working environment, where significant portions of their compensation depend on their PnL. It is natural to hypothesize that the biological and psychological mechanisms of financial risk processing are markedly different between professional traders and the rest of the population . Another interesting extension of our work is to conduct a similar experiment with a control group of subjects with little or no trading experience. In this way, one can determine the unique characteristics of professional traders' cognitive and emotional faculties which enable them to thrive in such a dynamic and uncertain working environment.

We conducted a large field experiment and measured the real-time physiological signals of 55 professional traders during their normal trading activities over a five-day period. Using a novel metric of PP activation based on the Mahalanobis distance, we found large variations in PP activation across traders. We showed that PP activation is correlated with and influenced by multiple factors, such as market fluctuations, financial transactions, the trader's experience, and the types of financial products traded. Our analysis confirms the prominent role of affect in financial risk processing for professional traders. Future studies may analyze the impact of traders' affect on their trading performance and the differences in affect between professional traders and subjects with no trading experience when making risky decisions under uncertainty.

S1 Appendix (ZIP)
Evolution of the Florida Pediatric Bone Marrow Transplant and Cell Therapy Consortium (FPBCC)
5f3f9a87-761b-4fc5-b329-9fd6db0a88ad
11883450
Surgical Procedures, Operative[mh]
Introduction

Bone marrow transplantation (BMT) and cell therapies (CT) are complex, life‐saving procedures for many pediatric hematologic, oncologic, metabolic, and immunologic disorders. Children undergoing BMT and/or CT require care in specialized centers. The costs and the burden of care for these procedures are significant. At the time of initiation of the Florida Pediatric Bone Marrow Transplant (BMT) and Cell Therapy (CT) Consortium (FPBCC), there were six BMT/CT centers in the state of Florida, providing services to approximately 4.1 million children under the age of 18 years. The six centers combined performed approximately 80 allogeneic and 50 autologous transplants annually (BMT Infonet data 2014–2015). The 1‐year survival of children undergoing BMT in Florida was in line with expected survival rates; however, it was lower than in the nation's leading pediatric BMT centers. Having small programs distributed across the state improves access to care and may improve the patient experience; however, survival has been reported to be better when transplants are performed in centers with disease‐specific expertise, which tend to perform larger numbers of transplants annually . In addition to the lack of disease‐specific expertise, the small number of transplants at each institution means that analyses of individual centers' outcomes are skewed by small patient numbers and are often insufficient for data‐driven implementation of practice change. To overcome some of the disadvantages of small BMT/CT centers, we organized a state‐wide pediatric consortium focused on the improvement of pediatric BMT/CT outcomes in Florida through clinical collaboration, data sharing, implementation of best practices, quality improvement (QI) projects, and prospective clinical trials. To evaluate the effectiveness of consortium activities, we analyzed and reported the change in CIBMTR‐reported 1‐year survival outcomes of centers participating in the consortium for the period immediately prior to consortium activities (2016–2018) and for the subsequent 3‐year period (2019–2021), in order to compare the change in FPBCC outcomes to other pediatric transplant programs in the US that were not participating in the outcomes‐oriented consortium.

Methods

2.1 Description of FPBCC Activities Targeting Improvement of Outcomes

Activities related to organizing the FPBCC were initiated in April 2018. Two phone conferences with representatives from five of the six centers were held to establish the interest of the BMT/CT programs in collaborating and founding the consortium, and to prepare for an in‐person meeting. On September 29, 2018, the first FPBCC in‐person meeting, with representatives from the five participating centers, was held in St. Petersburg, FL. At that meeting, consortium goals were defined as follows: (1) provide administrative and bioinformatics infrastructure for data sharing and statistical analyses, (2) facilitate collaboration among members by organizing regular video conferences and annual meetings, (3) support identification of BMT/CT‐specific quality indicators and support QI projects leading to the implementation of best clinical practices, and (4) facilitate the development of investigator‐initiated clinical trials (IIT) in the field of BMT/CT. A Florida Department of Health Bankhead‐Coley (BHC) Cancer Research Program 2018–2019 grant application was submitted in the fall of 2018. Although the BHC grant was not awarded, the blueprint outlining the proposed consortium activities in the grant application was followed.
The consortium activities were funded through three Children's Miracle Network grants. Five out of six pediatric BMT/CT programs in Florida signed data use agreements and a memorandum of understanding describing the rules of participation in the consortium and of data sharing, and formed the FPBCC. All institutions obtained IRB approvals/exemptions for data sharing and for each of the retrospective analyses of pooled data that were performed by the consortium. Activities of the consortium included: (1) monthly 1‐h video conferences, which started in November 2018 and are ongoing. The meetings are open to physicians and all other interested BMT/CT staff. During these conferences, results of retrospective analyses, quality improvement projects, and preparations for a prospective study were discussed. The participants also discussed clinically challenging patients and shared their transplant‐related practices; (2) retrospective analyses were performed on pooled data. Each center downloaded its data from the CIBMTR platform and submitted it to the Consortium for joint analyses of outcomes. The goal of the retrospective analysis was to identify risk factors for survival and to identify areas for improvement. First, outcomes for a 3‐year cohort (2014–2016) were analyzed; subsequently, retrospective analyses were performed by disease category, including 10‐year data (2010–2019). The third retrospective study gathered additional data from centers related to outcomes of patients with severe aplastic anemia and data on disease‐free, graft‐versus‐host disease‐free survival of patients transplanted for malignant disorders for a 5‐year period (2016–2020). Two quality improvement projects were implemented consortium‐wide. The first one was related to improvement in the accuracy of reporting of causes of death to the CIBMTR, and the second one introduced a change in donor selection criteria based on the results of survival by donor type from the 10‐year retrospective data; and (3) a prospective phase II clinical trial targeting high‐risk patients with hematologic malignancies was opened through the consortium and is currently enrolling patients. As prespecified in the consortium blueprint, the effectiveness of combined consortium activities was assessed by comparing 1‐year survival of patients receiving allogeneic transplant for the 3‐year period immediately prior to the initiation of the consortium (2016–2018) and the 3‐year period after initiation of consortium activities (2019–2021). Figure depicts consortium activity from inception. 2.2 Methods for the Analysis of the Effectiveness of FPBCC Activities Data for the analysis were obtained from the CIBMTR annual Transplant Center‐Specific Survival Reports published in 2020 (for the 2016–2018 period) and in 2023 (for the 2019–2021 period). One center (UF), which reports combined pediatric/adult outcomes, provided their actual 1‐year survival of recipients of first allogeneic transplants because their data could not be obtained from the Annual Transplant Center‐Specific Reports. The mean predicted survival reported by the CIBMTR for the four Florida transplant centers reporting pediatric data was used for the entire FPBCC, assuming that patient characteristics were similar across the five Florida pediatric programs. Five FPBCC centers and 38 other pediatric centers were compared to each other over the years 2016–2018 (preconsortium activities or pre) and 2019–2021 (postconsortium activities or post).
The 38 other centers included all pediatric centers with > 20 allogeneic transplants over the 3‐year period and had outcomes reported for both periods. Pediatric transplant centers that report outcomes to CIBMTR together with adult data could not be included in this analysis. Subsequently, the 38 transplant centers used in comparison were divided into 22 small‐size centers (20–70 first allogeneic transplants per center in a 3‐year period) and 16 large‐size centers (≥ 71 first allogeneic transplants in a 3‐year period). All FPBCC centers belonged to the small‐size group. Paired t‐tests were used to compare pre‐ and post‐1‐year survival of patients undergoing first allogeneic transplants, as well as the number of transplants. In addition, the predicted survival established by the CIBMTR was compared to the center's actual survival, and the difference between the two was tested by using the Mann–Whitney test. The Kruskal–Wallis test was used to compare survival differences in FPBCC, other small, and large centers. Descriptive statistics were used to summarize survival, differences between actual and predicted survival, and the number of transplants. We present the following mean ± SE results: (1) actual 1‐year survival during the two periods (pre and post) for FPBCC centers and all other centers (Figure ), with statistical testing for within‐group improvement and for the difference in improvement between the two groups; (2) actual 1‐year survival during the two periods (pre and post) for FPBCC, other small, and large centers, with p‐values calculated for within‐group change as well as for the change between FPBCC and small centers and between FPBCC and large centers (Figure ); (3) the difference between actual and predicted survival for FPBCC centers versus all other centers (Figure ) and for FPBCC versus small and large centers (Figure ), with statistical testing of the magnitude of that difference; and (4) mean ± SE transplant numbers for the 3‐year period for FPBCC centers and all other centers (Figure ), and for FPBCC centers versus small and large centers.
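To make the center-level comparisons above concrete, the sketch below applies the same family of tests (a paired test for pre- versus post-period survival within a group of centers, the Mann–Whitney test for comparing the magnitude of change between groups, and the Kruskal–Wallis test across three groups) to invented per-center values. All numbers, array names, and group sizes are illustrative assumptions; they are not the CIBMTR or FPBCC data analyzed in this study.

```python
# Illustrative center-level analysis with SciPy; all values are simulated,
# not the CIBMTR/FPBCC data reported in this study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 1-year survival (%) per center, pre (2016-2018) and post (2019-2021)
fpbcc_pre = np.array([70.0, 75.0, 78.0, 80.0, 84.5])   # 5 consortium centers
fpbcc_post = np.array([85.0, 88.0, 90.0, 91.0, 93.5])
other_pre = rng.normal(84.0, 5.0, size=38)              # 38 comparison centers
other_post = other_pre + rng.normal(3.0, 4.0, size=38)

# Within-group change, pre vs. post, using paired tests on the same centers
t_stat, p_paired = stats.ttest_rel(fpbcc_pre, fpbcc_post)
w_stat, p_wilcoxon = stats.wilcoxon(fpbcc_pre, fpbcc_post)

# Between-group comparison of the magnitude of improvement
delta_fpbcc = fpbcc_post - fpbcc_pre
delta_other = other_post - other_pre
u_stat, p_mw = stats.mannwhitneyu(delta_fpbcc, delta_other, alternative="two-sided")

# Three-group comparison, e.g. FPBCC vs. other small vs. large centers
delta_small, delta_large = delta_other[:22], delta_other[22:]
h_stat, p_kw = stats.kruskal(delta_fpbcc, delta_small, delta_large)

def mean_se(x):
    """Mean and standard error, matching the mean +/- SE reporting used here."""
    return x.mean(), x.std(ddof=1) / np.sqrt(len(x))

print("FPBCC pre  (mean, SE):", mean_se(fpbcc_pre))
print("FPBCC post (mean, SE):", mean_se(fpbcc_post))
print("paired t-test p =", p_paired, "| Wilcoxon signed rank p =", p_wilcoxon)
print("Mann-Whitney p, FPBCC vs. other centers =", p_mw)
print("Kruskal-Wallis p, three groups =", p_kw)
```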
Results 3.1 Overall Survival in FPBCC and Other Programs We compared the survival of five FPBCC centers with survival from the 38 other centers over the time periods before and after the initiation of FPBCC outcome improvement activities. Before the consortium activities, 1‐year posttransplant survival was 77.5% ± 4.8 for the five centers combined, and in the postperiod, the FPBCC survival was 89.5% ± 3.4. This improvement in survival over the 3‐year period was statistically significant, with a Wilcoxon signed rank test paired t‐test two‐tailed p‐value of 0.0313. The pre‐ and postsurvival for the other 38 pediatric centers was 83.8% ± 1.1 and 86.8% ± 0.8. That improvement over time was also statistically significant by paired t‐test, p‐value = 0.0121. When the improvement in survival among FPBCC centers was compared to the improvement in survival among the other 38 centers, the results were statistically significant with Mann–Whitney p = 0.0065, indicating a larger magnitude of improvement among FPBCC centers than among the other 38 pediatric centers (Figure ). The 3‐year overall survival pre‐ and postperiods for nonconsortium small centers were 82.4% ± 1.5, versus 87.9% ± 1.2, with the difference in improved survival being statistically significant p = 0.0059. The outcomes for large centers for the pre‐ and postperiod were 85.6% ± 1.3 and 85.4% ± 0.9, and this difference was not statistically significant with p = 0.2676. Survival differences among FPBCC, small, and large centers were significant with the Kruskal–Wallis p‐value = 0.0062. Results are summarized in Figure .
3.2 Difference Between Predicted Survival and Actual Survival in FPBCC and Other Centers CIBMTR‐predicted survival and actual survival, with intragroup and intergroup p‐values, are presented for FPBCC centers and the other 38 centers (as well as for the small and large centers). The FPBCC center that reported data together with the adult data lacked pediatric‐specific predicted survival data, necessitating an estimation based on the survival data from the other FPBCC centers. In the years 2016–2018, the five FPBCC centers had mean actual survival that was 4.6% ± 4.5 points lower than their predicted survival, while the nationwide data showed actual survival in alignment with their predicted survivals. In the 2019–2021 period, the actual survival was 2.7% ± 3.1 and 1.4% ± 0.8 higher than predicted in FPBCC centers and all other programs, respectively. The comparison between the two time periods in FPBCC centers showed that this change was statistically significant (Wilcoxon signed rank p = 0.0313). All other centers combined did not have a significant change in actual versus predicted survival between the time periods, as expected, since the pooled data are what define the predicted survival. During the years 2016–2018, other small centers had mean actual survival that was 1.4% ± 1.6 points lower than their predicted survival. In the subsequent period from 2019 to 2021, the actual survival rate among these smaller centers exceeded predictions by 2.5% ± 1.2. Meanwhile, during 2016–2018, larger centers outperformed their predicted survival rates by 1.9% ± 1.2. However, this performance saw a slight decline in the 2019–2021 period, with centers experiencing actual survival rates 0.1% ± 0.6 lower than their predicted survival. Notably, the comparison of survival rates between the two time periods for smaller centers yielded a statistically significant difference at Wilcoxon signed rank p = 0.0212. Similarly, larger centers also experienced a statistically significant decline in actual versus predicted survival between the time periods, with a Wilcoxon signed rank p = 0.0024. 3.3 Number of Transplants per 3‐Year Period FPBCC centers had a 3‐year average of 40.4 ± 8.7 transplants per center over the first reporting period, while all other transplant centers had an average of 77.2 ± 8.4 transplants for the first period. In the second reporting period, FPBCC centers had an average of 35 ± 6.5 transplants, and other centers had 74.5 ± 8.8 transplants. The decline in the number of transplants in FPBCC centers and all other centers did not represent a statistically significant change. When the change in the number of transplants was further evaluated by transplant center size, small centers, excluding FPBCC centers, had a decline from 42.6 ± 3.8 to 38.5 ± 3.8 transplants over the two periods. This decline in transplant numbers in small centers was statistically significant (Wilcoxon signed rank p = 0.0266), while the large transplant centers did not have a significant decline in their numbers (124.8 ± 11.1 to 124.1 ± 12.1, Wilcoxon signed rank p = 0.4691).
Discussion The FPBCC was established in 2018 as a response to the identification of lower HSCT survival rates across the participating institutions in Florida. The 1‐year post‐HSCT survival rate (77.5%), although within an acceptable range, was lower than that of leading US pediatric centers. We recognized that the lower survival rate would allow for a faster improvement if appropriate interventions were implemented. The FPBCC proposed a set of activities, such as monthly meetings, sharing of clinical experiences, data sharing, and identifying areas for improvement through retrospective analyses of outcomes. Although these are common‐sense activities, there were no previous measures of their ability to change survival outcomes. The significant improvement in the 1‐year post‐HSCT survival rate from 77.5% to 89.5% (p = 0.0313) was remarkable and a surprise to participating FPBCC centers. The other small centers' 1‐year post‐HSCT survival improved from 82.4% to 87.9% (p = 0.0059), and other large centers had a stable 1‐year post‐HSCT survival rate at 85.6%–85.4%. Interestingly, when looking at the actual 1‐year post‐HSCT survival rate compared to predicted survival based on CIBMTR data, the FPBCC centers were 4.6 ± 4.5 points lower than predicted in the pre‐FPBCC period. However, the FPBCC centers' survival improved compared to CIBMTR predicted survival data (+2.7% ± 3.1) during the second reporting period 2019–2021, which was statistically significant with a p‐value of 0.0313. When looking at the FPBCC data compared to other centers, the other smaller centers also had an improved actual survival compared to predicted (−1.4% to 2.5%, p = 0.0212), while other large centers did not have an improved survival rate compared to predicted (1.9% to −0.1%, p = 0.0024). These remarkable improvements in survival rates in Florida for the FPBCC centers over the two periods (2016–2018 and 2019–2021) cannot be solely attributed to the establishment of FPBCC and its associated activities. The improvements are also related to general advancement in the field, a better understanding of donor selection, the use of new drugs, improved supportive care and treatments, and the increased use of post‐transplant cyclophosphamide. These advances were also adopted by other centers, which is reflected in the significant improvement in 1‐year survival in other small transplant programs. Additionally, factors that could have decreased survival, such as COVID‐19 impact and staffing shortages, were likely present for all centers, small and large, across the country. It is believed that the establishment of the FPBCC had an impact on the larger magnitude of improvements.
One of the initial focal activities of the FPBCC was to better understand outcomes, as demonstrated by prior outcome‐based publications looking at retrospective data (FPBCC prior publications). The ability of the consortium to analyze outcomes data retrospectively for all centers together and identify trends in outcomes helped identify areas for improvement. It also facilitated group discussions on how to implement change. As previously published, our haploidentical donor transplant recipients were doing better than other mismatched donor transplant recipients [Pediatric HSCT in Florida (2014–2016)], providing support to continue with the selection of haploidentical donors. However, we determined that programs needed to update institutional guidelines for donor selection, and we identified that more detailed reporting of cause of death for patients was needed. These findings and subsequent actions helped programs better select donors and better understand the causes of death. Analyzing the trends in survival before and after consortium activities, it appears that the establishment of the FPBCC has had a positive impact on overall survival, which was the initial main objective of the consortium. We also noted that the number of transplants performed in FPBCC centers and other small centers decreased over the two time periods (2016–2018 and 2019–2021). For FPBCC centers, the number of transplants performed over the first 3‐year reporting period was an average of 40.4 ± 8.7, compared to 35 ± 6.5 in the second reporting period, with no statistical difference noted. While the decline seen in small centers (42.6 vs. 38.5) was statistically significant with a p‐value of 0.0266, the large centers (124.8 vs. 124.1) did not have a significant decline in transplant numbers. The decline observed in FPBCC centers is likely attributable to the same factors underlying the decline in other small HSCT centers. The reduction in transplant numbers seen in smaller centers could be attributed to the increased use of CAR‐T therapy in patients with relapsed and refractory B‐cell ALL and the potential effect of COVID‐19 in reducing the number of "elective transplants," such as those for patients with sickle cell disease or thalassemia. Overall, small transplant programs, including those in the FPBCC, made greater improvements in 1‐year survival outcomes than the large transplant centers during the same periods, 2016–2018 and 2019–2021. This is an impactful and meaningful improvement, which demonstrates that complex transplants can be performed in smaller transplant centers just as safely and effectively as in larger transplant centers, without compromising outcomes. This will allow families to remain closer to their homes, avoiding additional disruptions in their routines and family structure. The sharing of knowledge and experience within the FPBCC over this 3‐year intervention period has allowed its members to match the 1‐year survival figures of other leading transplant centers. Conclusion The FPBCC was established in 2018 with the goals of analyzing transplant outcomes and improving survival. Comparison of survival before and after FPBCC activities indicates that the activities designed and conducted by the FPBCC significantly improved 1‐year survival outcomes at its member centers. The blueprint for improvement developed by the FPBCC is applicable to other conditions.
Our collaborative activities, including the sharing of knowledge and data, have allowed for significant improvements over a 3‐year period among FPBCC centers. This initial approach was geared toward achieving catch‐up improvements. Our next goal will be to develop the disease‐specific expertise of consortium members and to continue to improve survival through clinical trials.
Identification of Multiple QTLs Linked to Neuropathology in the Engrailed-1 Heterozygous Mouse Model of Parkinson’s Disease
ba9ff36b-d023-43ce-b8d0-11056d443310
4994027
Pathology[mh]
Heterozygous disruption of En1 induces loss of dopaminergic neurons in SwissOF1 but not C57Bl/6 Previous studies indicate that En1 heterozygosity leads to loss of nigral DNs in SwissOF1, but not in C57Bl/6 mice. We confirm a 24% loss of DNs in 17-week-old SwissOF1-En1+/− mice compared to wild-type (wt) mice (mean number of cells 7346 and 9642, respectively; p < 0.0001). To address the effect of the same knock-out model on the C57Bl/6 background, we back-crossed SwissOF1-En1+/− males to C57Bl/6 females. Marker-assisted selection was used according to a speed-congenic approach, and mice from the fourth C57Bl/6 back-cross (N4) had an average of 3% SwissOF1 alleles outside the En1 locus. En1 heterozygosity in N4 mice did not induce degeneration of DNs in the SNpc: mean 6512 in N4-En1+/− and 7343 in N4-En1+/+. The mean number of DNs in C57Bl/6 wt (6890) and N4 mice was in accordance with previous reports for C57Bl/6, but as much as 29% lower than in SwissOF1 wt mice (9642, p < 0.0001). These data thus confirm loss of nigral DNs in En1+/− SwissOF1 mice, and show that the C57Bl/6 strain background confers protection against loss of nigral DNs after heterozygous deletion of En1. QTLs linked to loss of dopaminergic neurons in SNpc SwissOF1-En1+/− males were intercrossed with C57Bl/6 females to generate F1 and F2 populations with En1+/+ and En1+/− genotypes. The F2-En1+/+ and F2-En1+/− populations had a similar mean number of DNs. The variance in the F2-En1+/+ group represents the segregation of alleles regulating the number of DNs in the respective wt parental strains, while the variance in the F2-En1+/− group also represents QTLs involved in the response to En1 heterozygosity. The mean number of DNs in F2-En1+/+ mice is similar to C57Bl/6 wt, but significantly lower than SwissOF1 wt. Similarly, the F2-En1+/− mean number of DNs is close to that of C57Bl/6-N4-En1+/−, but significantly lower than that of SwissOF1-En1+/− animals. To identify loci linked to DN susceptibility to degeneration in the absence of one En1 allele, the number of DNs in the SNpc of F2-En1+/− mice at 17 weeks was used in genome-wide linkage analysis employing R/qtl. Out of 377 genotyped SNPs in the Illumina Mouse LD Linkage Panel, 114 were informative. The phenotypic spread suggested a complex genetic regulation of the trait. Single QTL analysis, which assumes a single QTL for the phenotype, revealed no significant peaks across the genome. We proceeded with a multiple QTL model and identified eight QTLs (En1a-h) linked to the number of nigral DNs. The full multiple QTL model included interactions between seven of the loci and explained 74% of the phenotypic variance (logarithm of odds (LOD) 28, p = 2.4E-9). Mice carrying two C57Bl/6 alleles in the most significant QTL, En1a, had an average of 6727 DNs compared to 6010 in heterozygous and 5816 in SwissOF1 homozygous mice. En1d showed a similar pattern, while En1g showed the opposite genotype effect. Notably, none of the identified QTLs showed any significant effect in the F2-En1+/+ cohort in QTL models or in single-marker analyses. Pairwise interaction analyses were performed on groups defined by the genotype at two loci, with nine combinations for each pair. The most significant interaction pair was En1b:En1d (F = 5.8E-06).
Among all genotype combinations, F2-En1+/− mice carrying C57Bl/6 alleles at both En1b and En1d display the lowest number of remaining DNs in the SNpc, while those homozygous for C57Bl/6 alleles at En1d but heterozygous or SwissOF1 homozygous at En1b display the highest number of DNs. In the En1a:En1c interaction, there is a higher number of DNs in animals homozygous for C57Bl/6 only in combination with SwissOF1 alleles at En1c. Thus, no single QTL was linked to the number of DNs after heterozygous En1 disruption. Instead, SwissOF1 and C57Bl/6 alleles in eight distinct and interacting QTLs regulate the phenotype. In addition, these QTLs are specific to the susceptibility to En1 disruption and not related to differences in DN numbers between the wt strains. The effect of En1 heterozygosity on axonal defects in dopaminergic neurons Axonal swellings were seen in the dorso-lateral portion of the striatum in all 17-week-old SwissOF1-En1+/− mice, but not in SwissOF1-wt or C57Bl/6-wt mice (data not shown). Swellings were also observed in most N4-En1+/− (7/8) and F2-En1+/− (105/120) mice. F2-En1+/− mice had, when comparing mean values, fewer but larger swellings compared to SwissOF1-En1+/− and N4-En1+/− mice. Correlation between axonal swellings and loss of DNs In F2-En1+/− mice displaying axonal swellings, there was a positive correlation between the number of axonal swellings and the average size of the swellings (r = 0.62, p < 0.0001). Axonal swelling size was not correlated with the number of remaining DNs in the SNpc. We therefore consider axonal swelling size a possibly distinct phenotype for neuropathology in the En1 heterozygous mouse model. In F2-En1+/− mice, the number of axonal swellings did not correlate with DN number. This can be explained by the fact that axons with swellings have been lost along with the respective soma of degenerated DNs, giving a low number of remaining axonal swellings in mice with severe neurodegeneration as well as in mice with little DN pathology. However, the more remaining DNs in the SNpc by 17 weeks of age, the fewer axonal swellings per neuron were seen (r = −0.37, p < 0.001). Thus, at the single timepoint studied here, axonal swellings are correlated with the number of DNs when taking previous loss of DNs into account. QTLs linked to the load of axonal swellings in the striatum Since axonal swellings appear in nigrostriatal neurons of En1+/− mice prior to degeneration of the DN soma, we used the load of axonal swellings, i.e., the number of axonal swellings relative to the estimated number of nigral DNs at 17 weeks, as the phenotype for linkage analysis. Single QTL analysis for the load of axonal swellings per remaining nigral DN yielded one significant peak, En1i, located on chromosome 15. Interestingly, mice homozygous for SwissOF1 alleles at En1i displayed the lowest load of axonal swellings. C57Bl/6 alleles at this locus are thus linked to the presence of more axonal swellings on remaining DNs. The phenotypic spread in F2-En1+/− indicated the presence of additional QTLs, and multiple QTL analysis confirmed En1i and identified another six QTLs. Among these, En1j and En1k displayed the strongest effect. Opposite to the effect seen at En1i, SwissOF1 alleles in En1j and En1k were linked to the presence of more axonal swellings on remaining DNs. The full model includes interactions between five out of the seven loci, with En1i interacting with En1j, En1k and En1l.
Comparing mice homozygous for C57Bl/6 and SwissOF1 alleles at En1i, two C57Bl/6 alleles at En1i lead to more swellings per remaining TH+ neuron regardless of the genotype of En1j and En1k. However, the phenotype in mice heterozygous at En1i depends on En1j and En1k, where mice homozygous for SwissOF1 alleles at En1j and En1k display the most swellings. The full model including seven QTLs and five interactions explained 80% of the phenotypic variance in the load of axonal swellings per remaining DN (LOD = 32, p = 1.7E-12). The load of axonal swellings is thus linked to one QTL, En1i, in a single model, and to seven, partly interacting, QTLs in a multiple model that explains the vast majority of the phenotypic variance of the trait. QTLs linked to the size of axonal swellings in the striatum The group mean and variation in average size of axonal swellings were similar in SwissOF1-En1+/− and C57Bl/6, while both the average size and the variation were greater in the F2-En1+/− population. Single QTL analysis did not identify any significant locus linked to the size of DN axonal swellings in the striata of F2-En1+/− mice, but the multiple QTL scan identified eight QTLs and interactions between six of these. The full multiple QTL model including interactions explained 74% of the phenotypic variance in the size of axonal swellings (LOD = 30, p = 7.0E-11). Effect plots show that homozygosity for C57Bl/6 alleles at En1p and En1s is linked to smaller swelling size. Moderate effects were seen from the other QTLs, likely due to the dependency of interacting QTLs as well as the relatively narrow range of the phenotype. En1p interacts with En1q, En1r and En1s, and C57Bl/6 homozygosity at En1p is linked to smaller swelling size in combination with SwissOF1 alleles at En1q, heterozygosity at En1r, and C57Bl/6 homozygosity at En1s. Overlapping QTLs Overlapping QTLs for the two axonal swelling phenotypes were found on chromosome (chr) 4 (En1j and En1v), and QTLs with estimated positions in close proximity to each other were found on chr 6 (En1k and En1p) and chr 15 (En1i and En1q). The direction of effect was similar within each pair, i.e., SwissOF1 alleles were linked to more swellings and larger swelling size for En1j-En1v and En1k-En1p, and to fewer swellings and smaller size for En1i-En1q. Thus, the same QTL, linked to both the average size of and the number of axonal swellings per remaining DN, may underlie the respective QTL pairs on chr 4, 6 and 15. Other possibilities for shared QTLs are En1h and En1n on chr 18, where heterozygosity at the locus resulted in a higher number of DNs and smaller size of axonal swellings, and En1d, En1e and En1s on chr 2, where mice homozygous for C57Bl/6 alleles display higher DN counts and a lower number of axonal swellings. The neuroprotective effect of En1c on chr 14, with heterozygous mice displaying the highest average number of DNs, was, however, not reflected in the nearby locus En1b, suggesting at least two separate effects of these loci.
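The single QTL scans referred to above can be approximated outside R/qtl with a marker-by-marker regression of the phenotype on genotype, converting each fit into a LOD score and deriving a genome-wide significance threshold by permutation. The sketch below is a deliberately simplified, hypothetical example with simulated genotypes (coded 0, 1, 2 for C57Bl/6 homozygous, heterozygous and SwissOF1 homozygous F2 animals) and a simulated phenotype; it uses Haley–Knott-style regression and does not reproduce the multiple-QTL models, interaction terms, or actual data analyzed in this study.

```python
# Simplified single-marker QTL scan (Haley-Knott style regression).
# Genotypes and phenotypes are simulated for illustration; this is not the
# F2-En1+/- data set and omits the multiple-QTL/interaction modelling used here.
import numpy as np

rng = np.random.default_rng(42)
n_mice, n_markers = 120, 114                          # roughly the informative-SNP count reported
geno = rng.integers(0, 3, size=(n_mice, n_markers))   # 0=C57Bl/6, 1=het, 2=SwissOF1

# Simulate a phenotype (e.g., number of nigral DNs) driven by one causal marker
causal_marker = 10
pheno = 6500.0 - 400.0 * geno[:, causal_marker] + rng.normal(0, 600, n_mice)

def lod_score(genotype, phenotype):
    """LOD at one marker: compare the residual sum of squares of the null model
    (mean only) with a linear model containing additive + dominance terms."""
    n = len(phenotype)
    rss0 = np.sum((phenotype - phenotype.mean()) ** 2)
    X = np.column_stack([np.ones(n), genotype, (genotype == 1).astype(float)])
    beta, residuals, *_ = np.linalg.lstsq(X, phenotype, rcond=None)
    rss1 = residuals[0] if residuals.size else np.sum((phenotype - X @ beta) ** 2)
    return (n / 2.0) * np.log10(rss0 / rss1)

lods = np.array([lod_score(geno[:, m], pheno) for m in range(n_markers)])

# Genome-wide significance threshold from the distribution of the maximum LOD
# score across markers in each phenotype permutation (as described in the text)
n_perm = 200
perm_max = []
for _ in range(n_perm):
    shuffled = rng.permutation(pheno)
    perm_max.append(max(lod_score(geno[:, m], shuffled) for m in range(n_markers)))
threshold = float(np.quantile(perm_max, 0.95))

print("peak marker index:", int(lods.argmax()), "LOD =", round(float(lods.max()), 2))
print("95% genome-wide permutation threshold:", round(threshold, 2))
```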
Although susceptibility to neurodegeneration is a complex genetic trait known to be strain-dependent in mice, this study is the first to genetically map loci regulating PD-like neuropathology in a spontaneous PD mouse model. We identify QTLs that explain the vast majority of the variation in DN degeneration and axonal pathology by linkage analysis in an F2 intercross with disruption of one En1 allele (En1 hemizygous). The F2 intercross was obtained from SwissOF1 mice, which display PD-like pathology with preferential loss of DNs in the SNpc when En1 hemizygous, and C57Bl/6 mice, which do not display DN degeneration when En1 hemizygous. We chose to map three distinct features of PD-like pathology: loss of DNs in the SNpc, load of swellings on DN axons, and size of swellings on DN axons. While analyses assuming the presence of a single QTL per phenotype only revealed a significant QTL for the load of axonal swellings (En1i), multiple QTL analyses revealed several loci with high LOD scores and interactions for all three phenotypes. A large number of QTLs and interactions between loci are typical for complex traits. Knowing that the etiology of 90% of PD cases is complex, with multiple interacting genetic and environmental risk factors, the presented QTLs linked to PD-like pathology in the En1 mouse model are particularly relevant to idiopathic PD. We replicated the loss of about 20% of DNs in the SNpc in four-month-old SwissOF1 mice lacking one En1 allele.
The total number of DNs estimated in our SwissOF1-En1+/− cohort was similar to that reported by Sonnier et al., but around twice the number reported by Nordström and colleagues. The reasons behind this discrepancy are likely related to parameters used in stereological cell counts, such as delineation of the region of interest, use of guard zones, thickness of sections, TH detection methodology, or differences between colonies of these mice. Interestingly, the number of DNs in the SNpc of C57Bl/6 mice (C57Bl/6-wt and C57Bl/6-N4-En1+/+) was significantly lower compared to SwissOF1 wt. These differences in DN numbers between adult mice of different inbred strains could be attributed to a different rate of early developmental neurogenesis/apoptosis or age-related cell death in the mesencephalon in the two strains. Like Sgado et al., we saw no loss of DNs in En1-heterozygous N4 mice having a large majority of C57Bl/6 alleles in their genetic background. Thus, despite a lower starting number of DNs, mice with a C57Bl/6 genetic background were more resistant to En1 depletion. We report a similar average and variation in the number of DNs in F2 mice with and without partial loss of En1, and the numbers are lower than those in each parental group. Considering that En1 is part of a complex network of transcription factors that orchestrate the development and survival of DNs, it is not surprising to see a large variation in the number of DNs among genetically heterogeneous F2 mice with only one En1 allele. Other important factors contributing to the phenotypic variation in the F2 population are differences in DN numbers between the wt parental strains SwissOF1 and C57Bl/6, and transgressive segregation, which may cause extreme phenotypes in hybrid populations. The reported QTLs linked to DN number likely represent both innate strain differences and strain-specific responses to En1 hemizygosity, but single-marker analysis for QTLs identified in F2-En1+/− did not show any significant effects in the relatively small F2-En1+/+ cohort. The multiple QTL models described here all had LOD scores that far exceeded the significance thresholds given by permutation tests. Some QTLs in the models had LOD scores below the significance threshold for individual QTLs. However, this threshold was estimated from the distribution of the maximum LOD score, rather than all LOD scores, in each permutation. In addition to contributing to the strength of the full model, some of the QTLs below the threshold overlapped with significant QTLs for other phenotypes. When QTLs for different phenotypes in a model overlap, they may represent shared or neighbouring alleles that regulate mechanisms of key importance to the model, in this case DN integrity and survival. We found indications of such shared QTLs on chr 2 (En1e, En1s), chr 4 (En1j, En1v), chr 6 (En1k and En1p), chr 15 (En1i and En1q) and chr 18 (En1h and En1n). Thus, the number of QTLs reported here is likely over-estimated, and the loci with impact on both axonal pathology and DN survival may regulate key features of DN integrity and function and are highly interesting candidate regions for fine-mapping. A limitation of this study is the one-directional cross, preventing analyses of founder effects. Some of the identified QTLs may thus be specific to a SwissOF1-En1+/− male × C57Bl/6 female cross, and not to the reciprocal cross. En1 polymorphisms in humans have previously been suggested to be associated with PD risk.
These were relatively small studies, and they would need to be replicated in larger cohorts to be conclusive. However, mouse models have demonstrated the relevance of En1 to PD. SwissOF1 mice with En1 disruption display, among other PD-like phenotypes, progressive degeneration of DNs. In addition to the loss of midbrain DNs, these mice exhibit several neuropathological features that are analogous to those seen in the brains of PD patients. For example, the SNpc neurons are affected earlier and to a greater extent than the adjacent VTA DNs. The loss of En1 function is associated with mitochondrial deficits, akin to what is observed in PD. Moreover, multiple changes in the mTOR pathway, which controls, e.g., autophagy, appear in En1 mice. Observations of ultrastructural changes in axonal swellings that are concurrent with accumulation of autophagic vacuoles in the nigrostriatal pathway, and similar phenotypes described in Lmx1a/b-deficient mice, suggest that DNs undergo a dying-back process with several features resembling what has been proposed to occur in PD. Furthermore, before DNs degenerate in the En1+/− model, their capacity to release and take up dopamine is dramatically impaired in parts of the striatum, similar to what is suggested to occur in PD. Moreover, there is evidence for marked neuroinflammation in the SNpc of En1+/− mice (Ghosh et al., in preparation), similar to what is seen in PD. Finally, the En1 protein has recently been reported to protect against mitochondrial insult and oxidative stress in DNs, which is of particular interest considering that oxidative stress has been strongly implicated as a pathogenic mechanism in PD. Aside from SwissOF1, the single-allele knock-out of En1 has been shown to cause neurodegeneration in other inbred mouse strains but no degeneration in mice with a C57Bl/6 background. A possible compensatory mechanism could act through En2, which has been shown to be able to compensate for En1. Based on our linkage analysis, however, we conclude that there is no cis-acting effect of C57Bl/6 alleles in the En2 locus, since none of the QTLs overlap the position of the En2 gene. There might, however, be trans-acting effects from distal QTLs that regulate En2 gene expression, transcript stability, translation, or protein activity. It should also be emphasized that possible QTLs in close proximity to the En1 gene on chromosome 1 could not be assessed in the F2 cohorts studied. This is due to transgene selection at the En1 locus, leading to alleles from the original transgene surrounding the En1 gene and a distorted genotype distribution (no C57Bl/6 homozygosity) in the transgenic region. Therefore, we cannot rule out the possibility that C57Bl/6 alleles close to the En1 knock-out transgene impacted the phenotypes studied here. While the loss of DNs in the SNpc is the classical hallmark of PD, recent research suggests that axonal changes in the nigrostriatal neurons precede cell loss. It has been proposed that Wallerian-like axonal degeneration is a common feature of various neurodegenerative disorders. In a study on human PD brains, the levels of TH and dopamine transporter (DAT) in axons were vastly depleted in the putamen and were virtually gone within 4 years of diagnosis (i.e., following onset of motor symptoms), whereas the loss of cells in the SNpc progressed most rapidly during the decade following PD recognition.
In addition, dystrophic axonal spheroids with accumulated beta- and gamma-synuclein have been found in the hippocampus of PD patients. These post-mortem studies support the notion that axonal failure is a significant prodromal hallmark of PD, and this is also reflected in animal PD models. En1 heterozygous mice with the SwissOF1 background display abnormal TH-positive axonal swellings as early as 8 days after birth. These increase in number and size over the following weeks, and exhibit accumulations of mitochondria and electron-dense vacuoles, suggesting dysfunction in axonal transport, autophagy or faulty synapse maintenance. No axonal abnormalities were reported in previous studies on En1 heterozygous mice with a C57Bl/6 genetic background, even when mice were studied until old age. In the present study, we did not observe cell loss in 17-week-old C57Bl/6-N4-En1+/− mice, but nonetheless striatal axonal swellings were as abundant as in SwissOF1-En1+/− mice of the same age. This could be interpreted as a sign of delayed nigrostriatal degeneration in En1 heterozygous C57Bl/6 mice compared to SwissOF1, and it cannot be excluded that older N4-En1+/− mice would exhibit nigral cell death. In the F2-En1+/− population analyzed here, we see a large inter-individual variation in both the neurodegenerative phenotype and the load of axonal swellings. By correlating these phenotypes, we conclude that the more DN somas remain in the SNpc, the fewer axonal swellings the DNs have, suggesting that surviving DNs could be protected from both soma and axon degeneration. Alternatively, the populations of degenerating vs. surviving DNs may belong to different subtypes. Linkage analysis did, however, identify both unique and overlapping QTLs for these two traits, arguing for them being biologically related pathological processes. To better understand the nature, causes and consequences of the appearance of axonal swellings, further investigation involving, e.g., neuroprotective agents is needed. Although finer mapping is necessary to pinpoint the specific genes underlying the QTLs, there are a couple of candidates, in or in close proximity to the identified loci, which have previously been implicated in PD pathogenesis. Recently, structures almost identical to the axonal swellings in En1+/− mice, TH-stained "abnormally large profiles" and "enlarged presynaptic boutons," were observed in the striata of adult Lmx1a/b conditionally-depleted mice, and they were accompanied by loss of dopamine in the striatum, loss of DNs in the SNpc and VTA, and other changes resembling both PD and the phenotype of SwissOF1-En1+/− mice. Since Lmx1b lies close to En1d identified in this study, it is a strong candidate for harboring allelic differences affecting susceptibility to En1 heterozygosity. Another QTL on chromosome 2, En1o, for the number of axonal swellings per DN, is close to the Foxa2 gene. Mice carrying only one copy of the Foxa2 gene show abnormalities in motor behavior in old age and an associated progressive loss of dopamine on the Swiss background. Otx2, located on chromosome 14, within En1x and in proximity to En1b and En1c, is another candidate that could be influential in our experimental paradigm. It is a homeobox transcription factor that is expressed in nigral DNs only during development. Mild over-expression of Otx2 in SNpc progenitors and neurons was sufficient to rescue En1 haploinsufficiency-dependent defects, such as progressive loss of SNpc neurons.
A hypothesis linking defective autophagy to axonal swellings is supported by the fact that the gene encoding Atg7, a necessary component of the autophagy process, is located close to En1k on chromosome 6. Studies have shown that conditional deletion of Atg7 in nigral neurons leads to age-dependent loss of DNs and a corresponding loss of striatal dopamine. En1k is linked to the number of axonal swellings per remaining neuron, and F2-En1+/− carriers of C57Bl/6 alleles at this locus display almost half as many swellings per DN as homozygous Swiss-allele carriers. This is an indication of a more efficient autophagy process in the C57Bl/6 compared to the Swiss strain, possibly underlying part of the protection against En1-induced pathology in DNs. Due to the complex genetic structure of idiopathic PD, we emphasize the importance of understanding the genetic regulation of dysfunction and degeneration of DNs in order to better understand disease etiology and to develop new therapies. The present mapping of QTLs linked to DN loss and axonal pathology identifies multiple interacting QTLs linked to these phenotypes and offers the possibility of genetic fine-mapping. Potential candidate genes from our study are of prime interest to validate by QTL fine-mapping, functional studies in culture and in vivo, and gene targeting. These genes will provide new clues to biologically relevant mechanisms of PD, which are needed to identify new neuroprotective strategies to increase the survival and function of DNs and alleviate patients' symptoms. Animals and breeding schemes All procedures described were approved by the Ethical Committee for the use of laboratory animals in the Lund/Malmö region and were conducted in accordance with the relevant guidelines and regulations. A schematic drawing of the breeding strategy is shown in . The En1+/− strain was generated as described earlier and bred on the SwissOF1 background (Charles River). To generate an F2 population, C57BL/6NCrl (C57Bl/6) males (Charles River) were crossed with SwissOF1-En1+/− females. From the F1 generation, En1+/+ males were crossed with En1+/− females to produce the F2 generation. In total, 129 F2-En1+/− and 57 F2-En1+/+ males were sacrificed at 17 weeks of age. In order to study C57Bl/6 mice with a single-allele knockout of En1, the disrupted En1 locus was transferred from SwissOF1-En1+/− to the C57Bl/6 background with a speed-congenic approach consisting of repeated backcrossing to C57Bl/6 females (Charles River) with marker-assisted selection. The backcross started with an F2-En1+/− male. In each generation, En1+/− male mice were subjected to single nucleotide polymorphism (SNP) analysis (Illumina Golden Gate assay) to estimate the fraction of C57Bl/6 background in the genome. The En1+/− male with the highest number of C57Bl/6 alleles was kept for back-crossing with C57Bl/6 females to produce the next generation. The phenotyped C57Bl/6-N4 generation had an average of <3% SwissOF1 alleles outside the En1 locus. DNA isolation Ear punches, tail tips or brain tissue were incubated in 0.5 mL lysis buffer (Trizma base (1 M, pH 8.5), edetic acid (0.5 M), sodium dodecyl sulfate (10%), sodium chloride (5 M) and Milli-Q water (Millipore Corporation)) with 2.5 μL Proteinase K (20 mg/mL) at 55 °C while shaking at 600 rpm, for 1 h for ear biopsies or 2 h for tail and brain biopsies. Once the tissue was lysed, it was centrifuged at 14,000 rpm at 4 °C for 10 min.
The supernatant was transferred to 0.5 mL ice-cold isopropanol, and the DNA was precipitated by gently shaking the tubes. At this point, the samples were centrifuged again at 14,000 rpm at 4 °C for 10 min; all liquid was then removed and the tubes were left to dry in a ventilated hood at room temperature (RT) for at least 1 h. Once dry, 100 μL Milli-Q water was added, and the samples could then be incubated at 37 °C overnight to dissolve the pellet. The DNA was subsequently used for both genotyping of En1 and genome-wide SNP genotyping. Genotyping En1 knockout To identify the single-allele knockout of En1, we performed PCR with primers for LacZ, the gene used for deleting the En1 allele. The PCR was performed by mixing 2 μL DreamTaq Green Buffer (Thermo Scientific), 0.5 μL dNTP, 13.3 μL Milli-Q water, 1 μL LacZ forward primer (5′-TGT ATG AAC GGT CTG GTC TTT G-3′, 10 μM), 1 μL LacZ reverse primer (5′-AAC AGG TAT TCG CTG GTC ACT T-3′, 10 μM), 0.2 μL Taq polymerase (# EP0702, Thermo Scientific) and 2 μL genomic DNA (10 ng–1 μg). The PCR product was 128 bp long and was identified by gel electrophoresis on a 2% agarose gel. We also applied the SsoAdvanced™ SYBR® Green Supermix (Bio-Rad) for genotyping of LacZ. By performing qPCR with a melting curve assay at the end, we could assess the presence of a product. The qPCR was performed by mixing 10 μL of SsoAdvanced™ SYBR® Green Supermix, 0.6 μL of LacZ forward and 0.6 μL of LacZ reverse primers, 50 ng–5 pg DNA, and Milli-Q water for a total volume of 20 μL. After perfusion and brain harvesting (as described below), each cerebellum piece was incubated while shaking in dark conditions overnight at RT in 2 μL MgCl2, 25 μL X-gal (40 mg/mL), 10 μL K3Fe(CN)6 (0.5 M), 10 μL K4Fe(CN)6 (0.5 M) and 0.955 mL PBS-T (0.3%). Perfusion and brain dissection At 17 weeks of age (+/− 3 days), mice were sedated by intraperitoneal injection of 0.2 mL sodium pentobarbital (40 mg/mL) before being perfused through the ascending aorta with ice-cold saline (0.9% NaCl) for 3 minutes. After isolating the brain, the cerebellum was sliced off, post-fixed in PFA (4%, pH 7.4) for 20 minutes, and then transferred to saline for subsequent LacZ staining. The remaining brain was placed in a mouse brain slice matrix and cut sagittally down the midline with a fine razor blade. The left hemisphere was immediately placed in 10–15 mL PFA (4%, pH 7.4), post-fixed overnight, and subsequently cryoprotected in 30% sucrose (in PBS, with 0.01% sodium azide). Immunohistochemistry The left hemispheres of the dissected brains were sectioned coronally on a freezing microtome (Leica SM2010R) at 40 μm. Immunohistochemical stainings were performed on free-floating sections. The SNpc sections were given an initial antigen-retrieval incubation in Tris/EDTA (pH 9.0) at 80 °C for 45 min. All sections were quenched with 3% H2O2/10% MeOH for 30 min and then blocked with 5% normal horse serum (NHS) for SNpc sections or normal goat serum for striatal sections before overnight RT incubation with the primary antibody (mouse anti-TH 1:10,000, Immunostar, Wisconsin, for SNpc; rabbit anti-TH 1:4,000, Millipore, California, for striatum). On the second day, sections were incubated with the corresponding biotinylated secondary antibody (horse anti-mouse 1:200; goat anti-mouse 1:200, Vector Laboratories) for 1 h at RT. This was followed by a 30-min incubation with an avidin-biotin peroxidase solution (ABC Elite, Vector Laboratories), and the antigen was visualized using 3,3-diaminobenzidine (DAB) as a chromogen.
Sections were mounted on glass slides, dehydrated with increasing concentrations of ethanol and pure xylene, and finally coverslipped using DPX mounting medium (Sigma-Aldrich, Gillingham). Sections with uneven, blurry, or poorly penetrating staining were excluded from the analyses. Stereological estimates of nigral neurons Stereology was performed according to the optical fractionator principle in order to quantify the total number of tyrosine hydroxylase-positive DNs in the SNpc. We used a Leica microscope connected to a digital camera (Leica MPS52), employing Stereo Investigator software (MBF Bioscience). Every third section (section sampling fraction, ssf = 3) of the midbrain region (Bregma −2.70 to −3.78) was analyzed, which yielded 9–11 sections per animal. Tracing of regions of interest (ROIs) was done using the 5X/0.11 lens, and counting was performed with the 100X/1.30 lens. The average mounted section thickness (h) was 23.6 μm (+/−2.6), and no guard zones were used due to variability in section thickness (thickness sampling fraction, tsf = 1). Section thickness was measured at every fourth site while counting, and the area sampling fraction (asf) was on average 0.0883. Dissector volume (h × A frame) was 60,500 μm3 on average, and the average number of DNs counted in each individual was 234 (+/−48). A maximal Gundersen coefficient of error (CE) of 0.08 was accepted, and a smoothness factor (m) of 1 was used. The following morphological criteria had to be fulfilled for a cell to be included: the cell body had to be clearly defined, with a visible nucleus where TH staining was sparse; a darker-stained cell that did not meet this criterion could still be counted if its projections were distinctly visible, making it clear that the stained particle was a cell. 29 of 129 F2-En1+/− and 26 of 57 F2-En1+/+ were excluded from the analysis due to complications with tissue processing, leaving 100 F2-En1+/− and 31 F2-En1+/+ for quantification. A genotype-blind operator performed the stereological assessment. To test for normal distribution of cell numbers in the F2 generation, the Shapiro-Wilk normality test in R (3.0.2) was used. Axonal swelling quantification Three to five consecutive sections from each animal at a bregma distance of 0.72–0.92 mm were stained and analyzed. High-resolution 25x pictures were taken using the same microscope, camera and software as for the stereology. For every section, four pictures were taken of precisely delineated ROIs spanning an area of about 3.6 mm2, representing the dorso-lateral part of the caudate putamen of the striatum, one of the main functional regions of the nigrostriatal pathway. ImageJ was used to identify the swellings and calculate their total number, as well as their size, by setting an exclusion threshold for particles <3 μm2. The average number and size of swellings were calculated based on all pictures from one animal. Nine F2-En1+/− samples were excluded due to low quality of staining. A genotype-blind operator performed image acquisition and processing.
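For readers unfamiliar with the optical fractionator, the estimate implied by the stereological sampling parameters reported above can be reproduced in a few lines of R. This is a back-of-the-envelope illustration only: it assumes the standard fractionator estimator and treats the reported averages as exact, and the vector name used for the Shapiro-Wilk check is a hypothetical placeholder rather than the study's data.

```r
# Illustrative optical fractionator estimate using the average sampling
# parameters reported above (standard estimator: N = sum(Q) * (1/ssf) * (1/asf) * (1/tsf)).
sum_Q   <- 234      # average number of DNs counted per animal
ssf_inv <- 3        # every third section analyzed, i.e. reciprocal of the section sampling fraction
asf     <- 0.0883   # average area sampling fraction
tsf     <- 1        # thickness sampling fraction (no guard zones)

N_est <- sum_Q * ssf_inv * (1 / asf) * (1 / tsf)
round(N_est)        # on the order of 8,000 TH-positive neurons per hemisphere for the "average" animal

# Normality of per-animal cell counts across the F2 generation was assessed with
# the Shapiro-Wilk test, which in R is a single call on a numeric vector of estimates:
# shapiro.test(f2_dn_counts)   # f2_dn_counts is a hypothetical placeholder vector
```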
Genome-wide SNP assay The SNP&SEQ technology platform at Uppsala University performed the SNP genotyping. The Illumina Mouse Low Density (LD) Linkage Panel, containing 377 SNPs selected from the Wellcome-CTC Mouse Strain SNP Genotype Set, was used with the Golden Gate Genotyping Assay protocol. To find parental-specific alleles, the genomes of the C57Bl/6 and SwissOF1 founders were genotyped. For a SNP to be parental-specific, it should vary between C57Bl/6 and SwissOF1, but not within the population of outbred SwissOF1 used for intercrossing. Following these criteria, 126 of the 377 SNPs were parental-specific. Among the F2-En1+/− mice, one individual had missing genotype data at all markers and was removed from the analysis. No other individuals lacked a notable number of markers, and none of the markers had missing genotypes for a notable number of individuals. Due to the selection of F2-En1+/− individuals, markers adjacent to the En1 locus (chr. 1, 77–162 Mb) showed distorted segregation patterns, with significant deviation from the expected 1:2:1 distribution, and markers with p-values < 1E-5 were removed from the analysis. No other markers had a significantly distorted allele segregation pattern according to the chi-square test for Mendelian segregation. We thus had good genomic coverage for linkage analysis, but the model with heterozygous transgene selection prevents detection of potential QTLs near the En1 locus. Single-QTL analysis The data were analyzed using R/qtl (v1.34–17) to identify gene regions linked to neurodegeneration. Scanone was used for single-QTL analysis. For the expectation-maximization (EM) and Haley-Knott methods, the genotype probabilities were calculated with a 0.5 cM step and a genotyping error rate set at 0.001. For multiple imputation, the genotypes were simulated with 1000 simulation replicates, a step length of 0.5 cM and an error probability of 0.001. Significance thresholds for LOD scores were obtained by permutation test, with 1000 permutations using Haley-Knott regression. The QTL analyses included 96 animals for the number of nigral DNs, 94 for swellings per remaining nigral DN, and 104 for average swelling size. Multiple-QTL analysis The single-QTL analysis is based on the assumption that there is only one QTL, and the scan is performed at one locus at a time. To increase the power to detect QTLs with additive or epistatic effects, we performed multiple-QTL analysis. The multiple-QTL models were fitted starting with the locus with the highest LOD score in the single-QTL model. The models were iteratively built by scanning for interactive and additive loci using addqtl with Haley-Knott regression. Fitqtl was used to fit the models. Loci and interactions with a p-value less than 0.05 in the drop-one-term ANOVA were kept in the model and used in scanning for additional loci. Genotype probabilities were calculated with a step length of 0.1 cM and an error probability of 0.001. To estimate the positions of QTLs in the model, we calculated the approximate 95% Bayes credible intervals. Significance thresholds for the multiple-QTL models were estimated by permuting randomly selected positions 5000 times and taking the 95th percentile LOD score. This was done for the full-model LOD scores as well as for individual QTL LOD scores in the model. The full-model significance threshold is thus a measure of the significance of the full models, while the QTL LOD score significance threshold gives an estimation of the contribution of a specific QTL to the respective model.
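To make the analysis pipeline above concrete, the following minimal R/qtl sketch walks through the steps described in this section (segregation-distortion check, single-QTL scan, permutation thresholds, and iterative multiple-QTL model building) using the stated step sizes and error probabilities. The input file name, phenotype column, genotype codes, and QTL positions are hypothetical placeholders, not the study's actual data.

```r
library(qtl)

# Load the F2 cross (hypothetical file and genotype codes; Swiss/B6 labels are placeholders)
cross <- read.cross("csv", file = "f2_en1_cross.csv", genotypes = c("SS", "SB", "BB"))

# Segregation-distortion check: chi-square against the expected 1:2:1 F2 ratio
gt <- geno.table(cross)
distorted <- rownames(gt)[!is.na(gt$P.value) & gt$P.value < 1e-5]  # markers to drop near the En1 transgene

# Genotype probabilities at 0.5 cM steps with a 0.001 genotyping error rate
cross <- calc.genoprob(cross, step = 0.5, error.prob = 0.001)

# Single-QTL genome scan with Haley-Knott regression ("em" or "imp" for the other methods used)
out.hk <- scanone(cross, pheno.col = "dn_count", method = "hk")

# Genome-wide significance thresholds from 1000 permutations (Haley-Knott)
operm <- scanone(cross, pheno.col = "dn_count", method = "hk", n.perm = 1000)
summary(out.hk, perms = operm, alpha = 0.05, pvalues = TRUE)

# Multiple-QTL model: start from the strongest locus, then iteratively scan for additional loci
qtl1    <- makeqtl(cross, chr = "14", pos = 25, what = "prob")   # placeholder chromosome/position
fit1    <- fitqtl(cross, pheno.col = "dn_count", qtl = qtl1, method = "hk", formula = y ~ Q1)
addscan <- addqtl(cross, pheno.col = "dn_count", qtl = qtl1, method = "hk")

# Approximate 95% Bayes credible interval for a QTL position
bayesint(out.hk, chr = "14", prob = 0.95)
```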
Statistical Analyses Statistical tests for the quantification of TH-positive DNs in the SNpc and axonal swellings in the striatum, and the correlations between them, were performed using GraphPad Prism software (version 6, GraphPad, La Jolla, CA). Differences between all seven groups used for the stereological estimation and the three groups used for the axonal swelling quantification were analyzed using a one-way ANOVA with Tukey's multiple comparisons test; statistical significance was set at a p-value < 0.05 and values are expressed as mean ± standard deviation (SD). Correlation analyses were performed using the Pearson correlation coefficient (r); statistical significance was set at a p-value < 0.05, and a 95% confidence interval was used.
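The group comparisons and correlations above were run in GraphPad Prism; for readers working in R, an equivalent minimal sketch would look like the following. The data frame, group labels, and values are simulated placeholders, not the study's measurements.

```r
# Hypothetical example data: one row per animal, with group label, estimated DN count,
# and axonal swelling load (values are simulated placeholders).
set.seed(1)
df <- data.frame(
  group            = rep(c("A", "B", "C"), each = 10),
  dn_count         = rnorm(30, mean = 8000, sd = 1500),
  swellings_per_dn = rnorm(30, mean = 0.5,  sd = 0.15)
)

# One-way ANOVA with Tukey's multiple comparisons (alpha = 0.05), as done in Prism
fit <- aov(dn_count ~ group, data = df)
summary(fit)
TukeyHSD(fit, conf.level = 0.95)

# Pearson correlation between remaining DNs and swellings per DN, with a 95% CI
cor.test(df$dn_count, df$swellings_per_dn, method = "pearson", conf.level = 0.95)
```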
How to cite this article: Kurowska, Z. et al. Identification of Multiple QTLs Linked to Neuropathology in the Engrailed-1 Heterozygous Mouse Model of Parkinson's Disease. Sci. Rep. 6, 31701; doi: 10.1038/srep31701 (2016).
Supplementary Information
Pharmacoepigenetics in type 2 diabetes: is it clinically relevant?
d3e64d9d-18ae-477b-8f26-114f31825769
9522755
Pharmacology[mh]
ESM 1 (PPTX 350 kb)
Insights into water insecurity in Indigenous communities in Canada: assessing microbial risks and innovative solutions, a multifaceted review
7881600d-cbe2-4b10-9fd6-1bec23d9f649
11493031
Microbiology[mh]
Canada contains the fourth largest freshwater reserve in the world, only surpassed by Brazil, Russia, and the United States of America. Although the vast majority of Canadians have access to a piped potable water supply (as expected in a freshwater-rich country), numerous Indigenous reserves across Turtle Island struggle with household water insecurity. Household water insecurity can be defined as the absence of safe, reliable, and affordable water, and systemic disparities in a high-income country such as Canada make this scenario plausible. Despite the alleged efforts of the federal government to eliminate all long-term drinking water advisories in Canada by March 2021, the issue still persists three years after the deadline. Water advisories are issued by local authorities to communities to warn against water consumption or to restrict its use completely. It is estimated that in Canada there are more than 100 short-term drinking water advisories at any given time. As of September 2024, more than 30 long-term drinking water advisories remain in Indigenous reserves. Water insecurity in Indigenous reserves is a multidimensional matter that started with the colonial past and continued with the deterioration of water, the environment, and other complex socio-economic structures existing in Canada. In this multifaceted review, we first explore the water advisories remaining in Indigenous reserves in the country. Moreover, we analyze additional factors that contribute to water insecurity in First Nations (FN) communities, such as cisterns and water storage facilities, as well as pathogenic bacteria and genetic elements such as antibiotic resistance genes that have been reported in water sources of some Indigenous reserves. We also review the negative effects of high levels of natural organic matter (NOM) in drinking water systems (DWS). Moreover, we discuss some methods available for drinking water treatment and NOM removal, such as coagulation, high- and low-pressure membrane filtration, ozone, biological activated carbon filters, ion exchange, and biological ion exchange. Additionally, we address the challenges of applying each of these methods in Indigenous reserves. Furthermore, we briefly explore the drinking water facilities available in Indigenous communities of other high-income countries such as the United States of America (USA), Australia, and some Nordic regions such as Greenland (Denmark), Finland, Norway, and Sweden. Finally, we review high-throughput tools such as metagenomics, culturomics, and microfluidic devices that can provide important information regarding microbial communities and microbial pollutants present in source water and DWS in Indigenous reserves. This investigation synthesizes existing literature and Canadian public records concerning the prolonged state of water insecurity in Indigenous reserves to promote understanding of the microbiological and chemical risks that it represents for public health. Several factors influence water quality, including source water quality, the drinking water treatment applied, the water distribution system, and water storage tanks. In the past few years, notable investment from the federal government of Canada has led to the resolution of 145 long-term drinking water advisories in Indigenous reserves. However, some communities have not had access to safe water for almost 30 years.
This review aims to highlight the challenges and consequences of inadequate DWS and the available technical and microbiological alternatives to address water and sanitation coverage in Indigenous reserves of Turtle Island. In a high-income country like Canada, water insecurity tends to be overshadowed, as more than 98% of the population has water and sanitation coverage. With this study, we want to raise awareness of the minority facing water insecurity and advocate for the prioritization of resources to ensure they have access to safe water and sanitation, an internationally recognized human right. While we restricted the scope of this review to technical and microbiological issues, we acknowledge that water insecurity is the result of socio-cultural factors, systemic disparities, historic and institutionalized marginalization of Indigenous people, and an ineffective and segregated system of water governance that applies to FN reserves. In this multifaceted review, we discuss technical innovations and advanced molecular approaches that could be instrumental in addressing and ultimately overcoming long-term drinking water advisories. Presently, water quality monitoring practices assess fecal contamination by detecting indicator bacteria, including coliforms like Escherichia coli (E. coli) and enterococci. These indicator microorganisms were established more than a century ago, and they have proven ineffective in assessing water for other microorganisms such as viral or protozoan pathogens. Advanced molecular tests reviewed in this article can complement the assessment of other pathogens and indicator microorganisms present in drinking water to help ensure safe water in Indigenous reserves. This article is one of the few that reviews both technical and advanced molecular tools that could be applied in remote Indigenous communities, and it is useful for researchers involved in the fields of environmental sciences, microbiology, water treatment engineering, and Indigenous studies. Database search and literature screening Federal and provincial government portals were used to review the status of water advisories in Indigenous communities of Canada as well as the recommended guidelines for NOM and drinking water in the country. The federal government web pages included in this review are "Indigenous Services Canada", "Environment and Climate Change Canada", "Health Canada", "Statistics Canada", "Employment and Social Development Canada", "Natural Resources Canada", "Canada's Chief Public Health Officer", "Office of the Auditor General of Canada", "Standards Council of Canada", "Province of Manitoba" and "Government of Ontario". Three official international government portals were reviewed to examine the drinking water guidelines of the countries mentioned in this study, including the "National Health and Medical Research Council" for the Australian Drinking Water Guidelines, the "U.S. Environmental Protection Agency" for the US drinking water regulations, and the "European Commission" for the European Union Drinking Water Directive. Furthermore, the PubMed and Google Scholar databases were used to identify relevant literature. To ensure a comprehensive exploration of the topics, a web search was conducted including the following keywords: "Indigenous communities" or "Indigenous reserves" or "First Nations" combined with "Canada".
The terms "water insecurity" and "natural organic matter" and "drinking water" and "drinking water advisories" or "boil water advisory" or "drinking water systems" or "drinking water treatments" or "drinking water facilities" were also used. Finally, to cover the microbiological approaches reviewed, the terms "Microbiology" or "Uncultivable bacteria" or "Metagenomic" or "DNA sequencing" were used. Portions of this text were previously published as part of a preprint ( https://doi.org/10.31219/osf.io/w5hxy ). Peer-reviewed publications, theses, and government portals addressing the following topics were retained for the final report: water insecurity in Indigenous communities in Canada, DWS in Indigenous communities, drinking water treatments, NOM, water storage tanks in Indigenous communities in Canada, microorganisms, metagenomics for water monitoring, culturomics, and microfluidics. A total of 169 scientific articles matched the topics explored. Eighteen government portals (15 national, three international) were included in the final review. The four major dimensions discussed include: • Water advisories. • Microbiological, chemical, and natural causes contributing to water insecurity. • Limitations of applying urban-style drinking water systems in Indigenous reserves in Canada and the management of DWS for Indigenous communities in other high-income countries. • The importance of determining the microbiome inhabiting drinking water systems and the cutting-edge technology available for its analysis. Synthesis of results and discussion Water advisories in Canada In Canada, the forced relocation of Indigenous people caused the DWS in Indigenous communities to be placed in remote locations.
Therefore, these DWS are susceptible to irregular connectivity and a shortage of qualified personnel on site, and they generally depend on sporadic operation and maintenance. Moreover, several DWS on reserves have been reported to lack modern infrastructure and are in urgent need of significant upgrades. Additionally, in Indigenous reserves, water delivery and maintenance responsibilities are shared between the federal government (specifically Indigenous Services Canada and Health Canada) and FN community leadership groups. This shared administration and these fragmented responsibilities have led to divergences in drinking water regulations in Indigenous reserves. In addition, Indigenous communities' dependence on federal funding to improve DWS, along with the slow and often delayed response from the government within a segregated and flawed system of governance, also contributes to Indigenous reserves experiencing water insecurity. Water insecurity on reserves represents a health threat and could be one of the causes of FN populations having the lowest projected life expectancies across Canada. In the country, most water-related activities are the responsibility of the corresponding provincial or territorial authority, and the obligations regarding the monitoring of water contaminants differ vastly from province to province. In some provinces, when potential hazards are identified in the water sources of public water systems (e.g., E. coli, cyanobacterial blooms, or disinfection by-products (DBPs)), local environmental authorities may issue alerts to the public, known as "water advisories", to warn against water consumption or ban its use completely. The length of the advisory could be less than a year (short-term advisory) or longer (long-term advisory). Depending on the water quality results, the nature of the water issue encountered, and a risk evaluation of the conditions at the DWS, different warnings may be issued by the responsible environmental public health officer. Boil water advisories (BWA), do not consume (DNC), and do not use (DNU) are the types of advisories recommended by Environment and Climate Change Canada. BWAs, the most common type of advisory, are usually precautionary and are generally issued when poor water disinfection, deficient filtration, pressure loss in the distribution system, or inadequate maintenance of the water treatment equipment is recognized. Similarly, DNC (also referred to as "do not drink") and DNU advisories are recommended during emergencies, for example, catastrophic events, chemical spills or other pollutants that affect human health after short-term exposure, unexpected changes in the physical characteristics of the water, or intrusion of undetermined contaminants through cross-connection problems. When the contaminant present in water can affect human health only through ingestion, a DNC advisory is issued. On the other hand, DNU recommendations are communicated when the existing pollutant has an effect through dermal and/or inhalation contact. Yet, every province and territory uses its own terminology to issue its water quality recommendations, and some regions (such as Ontario, Alberta, and parts of the Arctic) do not report water advisories for minor drinking water systems. Since 2015, more than 100 long-term water advisories have been lifted on reserves around the country.
Nevertheless, as of September 2024, there are approximately 27 FN with long-term drinking water advisories on public systems on reserves. Astonishingly, some of them have been dealing with advisories for almost 30 years. Regarding short-term BWAs, official sources only report the ones located south of the 60th parallel. However, when considering all provinces and territories of Canada, the estimated number of BWAs to be resolved might exceed 1,000 cases. Microbiological and chemical risks of water storage containers in Indigenous reserves In FN communities in Canada, the DWS infrastructure has been reported to be either utterly absent, inappropriate, obsolete, or of low quality. The use of wells and of water trucked to household storage facilities such as cisterns, when a potable source of water is not available, also contributes to water insecurity. The construction materials accepted by the Canadian Standards Association include steel, stainless steel, concrete, reinforced concrete, fiberglass, and polymers (e.g., polyethylene). The materials and components used in water storage tanks are critical to maintaining water quality. One example is the use of corrosion-prone materials that harbor the development of iron- and iron/manganese-oxidizing bacteria such as Gallionella spp. and members of the Siderocapsa genus. Generally, iron-oxidizing bacteria (e.g., Gallionella spp.) and sulfate-reducing bacteria such as Desulfovibrio spp. are responsible for microbially induced corrosion and biofilm development on exposed metal surfaces. Although biofilm development in these types of water storage units might act as a kind of barrier against corrosion, it can also increase the risk of pathogen development that can potentially affect human health. The complex, heterogeneous microbial communities present in biofilms can communicate through chemical interactions (known as "quorum sensing") to facilitate multicellular activities, exchange nutrients, and transfer hereditary material. In terms of biofilm growth in particular, iron water storage tanks have been found to harbor higher total bacterial counts than their plastic counterparts. Nevertheless, plastic cisterns have been associated with the presence of unacceptable levels of metals such as lead, aluminum, and copper, among others, and even carcinogenic compounds (e.g., benzene) in potable water. Moreover, high temperatures, changes in pH, and the presence of chloramines can influence the degradation of the polymeric matrix of these types of cisterns and favor the transfer of toxic compounds to the stored water. It is equally important to ensure proper closure of the cisterns and to avoid leakages. Algae, fungi, protozoa, bacteria, and viruses, for example, can enter from windblown dust, debris, and rainwater if the water storage tank is not properly sealed. Additionally, the presence of leakages in the storage unit could allow the introduction of bird feces, which are known to carry harmful bacteria such as Salmonella spp. and Campylobacter spp. The recommendations for these water storage tanks state that testing the water at least once per year is advisable to validate the presence or absence of microbiological pathogens (e.g., total coliforms and E. coli). Nonetheless, some studies have confirmed that the water quality in these water storage tanks is below the stipulated standards. The Guidelines for Canadian Drinking Water Quality state that the maximum contaminant level (MCL) for E. coli and total coliforms is 0 CFU/100 mL of water.
However, E. coli counts higher than 60,000 CFU/100 mL were detected in drinking water distribution systems in a fly-in FN community in the Island Lake region in the province of Manitoba. Likewise, unacceptable levels of E. coli (>1,000 CFU/100 mL, >900 CFU/100 mL, and >50 CFU/100 mL) were found in piped water and in a fiberglass tank used to store water for consumption in an FN community located in Manitoba (M. Moniruzzaman & M. Uyaguari-Diaz, 2024, unpublished data). Moreover, in the same community, total coliform counts exceeded acceptable detection limits (>1,000 CFU/100 mL, >900 CFU/100 mL, >50 CFU/100 mL, >0 CFU/100 mL and >1 CFU/100 mL) at two different locations in three out of 16 sampling events conducted from April 2023 to September 2024. Additionally, high heterotrophic counts (>1,000 CFU/100 mL and >500 CFU/100 mL) were reported in both the piped water and the fiberglass tank within the same community (M. Moniruzzaman & M. Uyaguari-Diaz, 2024, unpublished data). Even though heterotrophic bacteria do not represent a direct threat to public health and the counts obtained did not exceed the maximum acceptable concentration (500 CFU/mL), these high counts can interfere with E. coli and total coliform recovery methods. Additionally, lower heterotrophic counts have been associated with better maintenance of the water facilities. Furthermore, the presence of antibiotic resistance genes (ARGs) such as ampC (β-lactam resistance), mecA (methicillin resistance), and sul1 (sulfonamide resistance) in both source and drinking water in FN reserves has been reported. Moreover, pathogenic bacteria such as Legionella pneumophila, the agent responsible for Legionnaires' disease, a severe form of pneumonia, have also been isolated. These microorganisms or genetic elements (such as ARGs, plasmids, and integrons) can infiltrate the water system through compromised plumbing or cisterns exposed to contaminated water. Moreover, biofilms, which begin building up on submerged surfaces within the first week, can further facilitate their spread.
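To make the unit distinction above explicit (the coliform guideline is expressed per 100 mL, while the heterotrophic plate count ceiling is per mL), the following minimal R sketch shows what a basic guideline-exceedance check looks like. The sample values are the counts reported above and the limits are those cited in the text; the table layout and site labels are illustrative only.

```r
# Illustrative compliance check against the cited Canadian guideline values:
# E. coli and total coliforms: 0 CFU/100 mL; heterotrophic plate count: 500 CFU/mL.
samples <- data.frame(
  site        = c("piped water", "fiberglass tank", "piped water"),
  ecoli_100mL = c(1000, 900, 50),   # reported E. coli counts (CFU/100 mL)
  hpc_100mL   = c(1000, 500, NA)    # reported heterotrophic counts (CFU/100 mL)
)

samples$ecoli_exceeds <- samples$ecoli_100mL > 0       # MCL is 0 CFU/100 mL
samples$hpc_per_mL    <- samples$hpc_100mL / 100       # convert to CFU/mL before comparing
samples$hpc_exceeds   <- samples$hpc_per_mL > 500      # ceiling of 500 CFU/mL
samples

# Note: counts of ~1,000 CFU/100 mL breach the E. coli MCL outright, but once
# converted to CFU/mL they remain below the 500 CFU/mL heterotrophic ceiling,
# which is why the heterotrophic results above were not guideline exceedances.
```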
The contribution of high levels of NOM to water insecurity In Canada, all public, semi-public, and private DWS are regulated by provincial and territorial authorities, with guidelines tailored to the specific type of source water used. In potable water sources in the province of Manitoba (home to the fourth-highest number of Indigenous people), for example, levels of dissolved organic carbon (DOC, a component of NOM) often exceed 20 mg/L. By comparison, the typical DOC concentration of surface water in the prairies is 8–12 mg/L. NOM is generally present in natural aquatic sources as a result of the breakdown of terrestrial plant material and the by-products of bacteria and eukaryotes (e.g., algae). Seasonal changes and runoff of DOC from the land into source waters (e.g., during storms) can also influence the levels of NOM in aquatic environments. The presence of NOM in drinking water treatment has been associated with bacterial regrowth and reduced effectiveness in inactivating other microorganisms, such as bacteriophages and Cryptosporidium. Organic and inorganic complexes represent a source of energy for heterotrophic and chemoautotrophic bacteria, respectively. NOM also facilitates the transport of both the hydrophobic organic compounds it contains and toxic heavy metals such as copper (Cu), arsenic (As), lead (Pb), mercury (Hg), cobalt (Co), iron (Fe) and chromium (Cr). When NOM interacts with the chlorine used for water disinfection, various halogenation and oxidation reactions result in the formation of DBPs, which include potentially genotoxic organic compounds. For instance, trihalomethanes (THMs) and haloacetic acids (HAAs) are the DBPs regulated in the Guidelines for Canadian Drinking Water Quality. The maximum acceptable concentration within Canada is 100 µg/L for THMs and 80 µg/L for HAAs. However, it has been documented that more than 300 water systems in Canada with populations of less than 5,000 people have exceeded the maximum permitted concentration of HAAs. Moreover, high levels of NOM generally increase treatment costs, because higher DOC concentrations require more intensive coagulation and filtration during drinking water treatment. This translates into a significant demand for coagulants such as aluminum sulfate (Al2(SO4)3), polymerized ferrous sulfate (PFS), poly-aluminum chloride (PAC) or chitosan to limit biological growth, as well as chemicals to adjust the pH of the water. Urban drinking water methods and their limitations in application to Indigenous reserves in Canada In Canadian urban settings such as Toronto, Winnipeg, Edmonton, and Regina, the drinking water processes to remove NOM and other water contaminants may include the following phases: • Coagulation/flocculation: to aggregate and increase the size of the particles and NOM present. • Sedimentation: to remove the suspended solids from water. • Ozonation: to decompose NOM into low-molecular-weight fractions and chemically destroy microbial cells. • Rapid sand filtration: commonly used to remove biodegradable organic matter, ammonium, and other organic micropollutants. • Addition of chlorine: to inactivate remaining pathogenic and non-pathogenic microorganisms present. • UV disinfection: to induce biochemical inactivation of waterborne parasites. As mentioned above, coagulation is the most commonly used method for removing NOM in Canada. Coagulation reduces the repulsion forces and aims to transform dissolved organic matter into neutral particles by adsorption onto aluminum- or iron-based coagulants. The particles efficiently accumulate through flocculation and are then removed by clarification. The downsides of applying coagulation in remote DWS include high costs and the high doses of coagulants and other chemicals needed for pH adjustment. Depending on the water source and the specific conditions of the water to be treated, additional drinking water methods available include: • High-pressure membranes: High-pressure membranes are advanced filtration methods developed to remove dissolved matter from water, at the cost of a high energy demand. They are generally used for groundwater to reduce salt content, nitrates, and other organic and inorganic micropollutants. Some examples of high-pressure membrane filtration include reverse osmosis (RO) and nanofiltration (NF). RO is a pressure-driven technology that uses a semi-permeable membrane typically made of cellulose and polyamide, which enables the passage of water molecules while blocking solids, dissolved matter, colloids, salts, and organic matter with a molecular weight greater than 50 to 100 Da.
Likewise, NF mostly uses polymer membranes with larger pore sizes (1–100 nm) than those used in RO. The pressure used in NF is lower, and it thus consumes less energy than RO. The limitations of RO and NF include the high cost of the membranes, maintenance requirements, and biofouling. These methods are not easy to implement in Indigenous reserves due to their high energy demand (energy that is commonly not available in remote communities), the costs associated with membrane replacement, and the need for highly trained operators. • Low-pressure membranes: Low-pressure membranes are advanced water treatment methods that remove macromolecules (such as dyes, proteins, and polysaccharides) with a low energy demand. Generally, these membranes are not capable of eliminating all dissolved organic matter alone because of membrane fouling. Therefore, an appropriate water pre-treatment is required. Some examples of low-pressure membrane processes include microfiltration (MF) and ultrafiltration (UF). These membranes are commonly used after the application of coagulants, which improves the permeate flux and helps to avoid membrane fouling. Notably, MF and UF are effective for particle and microbial removal; the drawback is that the membranes used for these methods are recognized to be fragile and costly. Additionally, in most cases, only a fraction of the organic components is removed. MF and UF membrane treatments fail to remove significant levels of NOM. Consequently, low-pressure membrane filtration systems are regarded as pre-treatments for more sophisticated methods such as NF or RO, and their cost makes implementation in the DWS of Indigenous communities impractical. • Ozone: Ozone is an advanced oxidation process that is useful for breaking organic chains and detaching aromatic rings in recalcitrant organic complexes, along with efficient microorganism inactivation. • Biological activated carbon (BAC): BAC filters are a type of water treatment that combines adsorption onto activated carbon with biodegradation by microorganisms to purify water. In this type of filter, activated carbon is used as a carrier that supports the growth of microorganisms, which subsequently degrade organic compounds. Ozone is often paired with BAC filtration. BAC filters following ozonation have shown significant effectiveness in removing NOM, ozonation by-products, DBP precursors, as well as taste and odor compounds. There are more than 800 water facilities using ozone as part of their water purification system across Canada. Some disadvantages of ozone treatment, depending on the water matrix (surface water or wastewater) and/or the residual pollutants, are the incomplete degradation of organic substances, which results in the generation of potentially harmful by-products (e.g., brominated organics, aldehydes, and carboxylic compounds). Furthermore, as a result of ozonation, DOC levels could increase because of the liberation of extracellular organic matter and other proteins and polysaccharides. Moreover, high ozone doses are required for the successful inactivation of microorganisms. Lastly, ozone has a short lifetime and must be produced on site, which is not an option for water systems on Indigenous reserves due to their remote locations. Regarding the limitations of BAC systems, their gradually decreasing adsorption capacity over time and low DOC removal should be noted.
When dissolved organic matter and other complex compounds are not adequately removed during water treatment, pathogenic microorganisms can proliferate inside the water distribution system. Without proper maintenance, this method therefore represents a health risk for consumers in addition to poor water quality. • Ion exchange (IEX): IEX is an electrochemical method in which ions from the water are swapped with ions held by resins that contain electrically charged active centers with acidic or basic groups. The most common materials used for IEX resins include methacrylic acid, sulfonated styrene, and divinyl benzene (DVB). Cation exchangers carry sulfonate or carboxyl groups and use sodium, potassium, and hydrogen as counterions, whereas anion exchange resins contain quaternary ammonium groups with chloride as the counterion. During water treatment for DOC removal, anionic IEX takes place when the negatively charged dissolved organic matter in waters with pH values ranging from 6 to 8 (which is the case for most drinking water sources) has a higher affinity for the ion exchange resin than the ion being exchanged. The removal of anions releases a Cl− ion from the anionic resin. Anionic IEX can eliminate up to 90 percent of DOC. Additionally, the cost of a traditional IEX system fluctuates around $0.1–$0.2 per 1,000 liters of purified water, which is significantly lower than that of membrane-based methods. Removing DOC from water through IEX may be considered a viable, efficient, and affordable alternative for rural systems, as it does not require continuous operational capacity or personnel to operate other treatment processes. The drawbacks of IEX implementations include the exhaustion of the resins' chloride counter-ions after 3 to 8 weeks of operation (depending on DOC levels, operating conditions, and IEX capacity), which requires frequent regeneration. The regeneration waste contains elevated concentrations of sodium chloride and NOM and demands careful disposal, since it can negatively affect aquatic ecosystems and local plumbing infrastructure if discharged directly into sewer sheds. Thus, the constant transportation of regenerants (10–12% NaCl solution) represents a clear disadvantage for remote Indigenous communities. In IEX, the macro-porous structure of the resin and the abundant supply of carbon favor biofilm development on the resin in the absence of regeneration. The establishment of biofilm was long considered undesirable; nevertheless, several recent studies have demonstrated that the biological activity found on IEX resins contributes to the removal of NOM. This high-performance method, now called biological ion exchange (BIEX), extends the lifetime of the saturated IEX resin with little filter maintenance, minimal waste discharge, and low operational cost. BIEX has recently been applied in the drinking water system used by the Middle River community located in central British Columbia. This Indigenous reserve, which is part of the Tl'azt'en Nation, has been under a drinking water advisory for over a decade. The implementation of BIEX in this community resulted in high DOC removal, with no need for replacement of the filter in more than 12 months. Remarkably, after the implementation of BIEX in this community, the drinking water advisory was lifted.
Although there is not yet a consensus reached regarding the percentage of DOC removed by the biological activity of the resins used in this method, strong evidence suggests that biodegradation may significantly influence DOC levels . This method can be applied in complementation with UV disinfection, an advanced oxidation process, and chlorine for NOM elimination . Further research is, however, required to elucidate the dynamics of biofiltration in NOM removal through IEX. It is critical to identify accessible technologies capable of removing NOM and treating source water in DWS of Indigenous communities, particularly for those under long-term drinking water advisories. This will ensure compliance with water quality standards and safeguard the health of all users. Drinking water treatments in Indigenous communities of other high-income countries Although information on DWS in Indigenous communities worldwide is limited, reports indicate that many of these communities’ face challenges similar to those in Canada. Using untreated groundwater, storage, and transportation facilities such as cisterns and water trucks seems to be a common denominator. In the USA, reports revealed that approximately 12% of Native Americans including Alaska Native communities do not have basic sanitation facilities . In the Navajo Nation, one of the largest Indian reservations in this country, studies evidenced that the DWS had a very basic infrastructure consisting of wells, pressure, and distribution tanks operated by the owners. In this Nation, around 30% lacked piped water . A high number of “tribal facilities” exceeded the safe limits established by the US EPA drinking water regulations for fecal bacteria and heavy metals including arsenic and uranium . In Australia, most Indigenous people rely on untreated groundwater that has not met the recommended limits for chemical and microbiological parameters by the Australian drinking water guidelines . In Greenland (Denmark), where more than 80% of the population is Indigenous , source water is treated with sand filters, UV and/or chlorination. To transport water to houses without piped water (approximately 10%) water trucks bring water inside households’ storage units. Another alternative is the existence of tap houses where people fetch water . Although records are limited or absent in small DWS in Greenland, some reports have detailed water facilities not meeting the microbiological European Union drinking water regulations (EU DWR), and BWA are frequent . Within the same context, Indigenous populations such as the Sámi people who are mostly distributed (∼95%) in Northern parts of Norway, Sweden, and Finland face challenges related to water and land . Although specific reports on the DWS infrastructure used in the communities of the Sámi people are not available, most small DWS in these Nordic countries use groundwater as their main water source . In Finland and Norway for example, small DWS typically have dug wells or boreholes. Generally, water from these DWS does not go under any treatment before distribution unless the source water has been demonstrated not to be “well protected” . While the water quality has been cataloged to meet the minimum standards of the EU DWR, there are reports indicating water fecal contamination with E. coli (which doesn’t overrule the presence of other pathogens) in DWS of both countries . In small DWS of Sweden, untreated groundwater is also distributed unless high levels of iron and manganese are found. 
If water treatment is required, water is aerated, filtered with rapid sand filtration, and disinfected. Sometimes, groundwater also goes through activated carbon filters . The monitoring of water supplies has been reported to be either absent or insufficient in Greenland (Denmark) and Sweden, respectively. The data reveal the vulnerability and ineffectiveness of the DWS that Indigenous communities face around the world. Importance of determining the microbiome inhabiting DWS One of the main goals of drinking water treatments in Canada is to remove NOM since its presence is problematic because of the potential formation of DBP , contribution to biofilm growth , and high operational costs as previously reviewed. The use of biological-activated methods in drinking water treatment plants has proven to have a positive impact on the degradation of a fraction of NOM . Even though some micropollutants need specialized electrochemical degradation , the effective depletion of contaminants including pharmaceuticals, personal care products, endocrine disrupting compounds, arsenic, manganese, and ammonium, has been well reported . In this context, biofilters assist in the reduction of membrane fouling, color and odor constituents, disinfectant doses, and DBP precursors . A better understanding of the physiological, metabolic, biochemical, and ecological characteristics of the microbial networks found in DWS can be achieved by the identification of their genome . Understanding the growth conditions, symbiotic relationships, pathogenicity risks, and habitat preferences of the microorganisms present in DWS and raw water is critical for making water safe. The aforementioned factors also provide insights to alter the conditions surrounding the microbiota and modulate the development of biofilms . Regulating the microbial communities in biologically activated filters used in DWS could lead to the development of advanced approaches that positively impact various aspects of water safety. For instance, monitoring the microbiome in both water filters and source water could facilitate the effective prevention of high-risk pathogens . In addition, when using BAF, efforts should be made to provide an ideal working environment for the microorganisms contributing to the degradation of water contaminants . These efforts will allow the recognition of the correlation between microbial community structure and biofilter function . The customization of the microbial community present in BAF according to the definitions proposed by includes 3 approaches: • Bioaugmentation: with the inoculation of pivotal endogenous or exogenous microorganisms from an enriched source to the one of interest to accelerate the normal microbial establishment process and fasten biodegradation . • Amendment: that refers to the adjustment of exogenous compounds required for biological activity such as nutrients and oxidants ; and • Supplementation: of endogenous substances to achieve higher amounts than the ones contained naturally within the filters . Although not considered in most operative water systems using BAF, the enhancement of biological activities has demonstrated a significant improvement in different aspects of the water filtration process . For example, in a bioaugmentation experiment conducted by , enriched nitrifying bacteria from an operational sand filter was transferred to a novel filter, resulting in an acceleration of the development of key nitrifiers in the filters . 
This approach enables speeding the oxidation of ammonium in groundwater, a process that under normal conditions would take several months (Pinto et al., 2016; ; ). In another study conducted by , the amendment of nutrients and peroxide resulted in filter life extension as well as a breakthrough decrease of DOC and other undesirable components . In contrast, these improvements were not observed with other BAF systems used without this enhancement . Moreover, in research conducted by , it was proposed that supplementation of phosphorus in the biofilters only when P levels are limited helps to ameliorate biofilter hydraulics . In natural conditions, phosphorus is low, due to removal through conventional treatments ( i.e., coagulation/flocculation) . The deficit of P contributes to an increase in the extracellular polymeric substances of the biofilm which has been correlated with bio-clogging and less filter durability . Therefore, more efforts should be considered to determine the microbiome established in the biofilters to increase their durability and secure correct functioning in DWS. Identifying the microbiome present in biological water filters represents the cornerstone for finding putative functions associated with these microorganisms. Thereby, any attempt of amendment, supplementation, and/or bioaugmentation to improve the biodegradation of any organic or inorganic component, should be complemented with the microbial fingerprints present in the biofilter to link activity and symbiosis of the microbiome . Metagenomics Culture-dependent methods have been widely used as the technique of reference for controlling the presence of pathogenic organisms in water. However, these methods do not provide significant information regarding the total microbial diversity and its changes in source water and DWS . Moreover, some commercial kits used for fecal bacteria quantification in water ( i.e., compartment bag tests) have been demonstrated to underestimate the concentration of pathogenic bacteria. , which increases the risks of false negative results. Besides, most bacterial cells present in the water as well as in BAF and DWS are not culturable or are out of the range of detection of commercial tests . To obtain information about the type, abundance, function, pathogenicity, and metabolic requirements of the drinking water microbiome, culture-independent or molecular methods are the approach of choice . These high-throughput methods can provide more detailed information to monitor BAF, enabling targeted modifications that can enhance the performance of these filters in DWS . Moreover, the rapid decline in the cost of these sequencing tools makes them conveniently affordable, even for DWS in Indigenous reserves . Up to date, Metagenomics, which involves studying all nucleic acids from a specific sample, has become the preferred method for microbiome analysis, especially using targeted or amplicon-based metagenomic approaches . One of the main reasons for these preferences is the high level of conservation and hyper-variability of the genomic marker used, which allows for the identification of different species . The multiple sets of DNA sequences identified during high-throughput sequencing can be used to evaluate the taxonomy of the microorganisms present in drinking water and biofilter microbial communities . For instance, deep amplicon sequencing of 16S rRNA and chaperonin 60 or cpn60 (mitochondrial protein) has been used for identifying bacterial communities in source water and DWS . 
Other microorganisms such as eukaryotes and fungi have also been described in aquatic environments using 18S rRNA and Internal Transcribed Spacer (ITS) . Similarly, characterization of the viruses found in drinking water has been attempted by studying specific viral groups using biomarkers such as RNA-dependent RNA polymerase (RdRp) for RNA virus, gene 23 (g23), and gene 20 (g20) for DNA virus, among others . Despite the virome characterization obtained using the mentioned viral biomarkers, approximately 50% of viral hits remain unknown when searched in public databases . The hassle relies on viruses lacking “universal” gene markers which makes the identification of abundance patterns and community structure for all viruses a challenge . Despite this, several viruses have been associated with human fecal contamination such as Norovirus, Enterovirus, Rotavirus, Hepatovirus A, Pepper mild mottle virus (PMMoV), crAssphage, and human adenovirus (HAdV), among others . Additionally, depending on the source of water, animal-specific enteric viruses can also serve as fecal indicators, such as the case of porcine and bovine adenoviruses’ ((PAdVs) and (BAdVs), respectively) as well as bovine enterovirus (BEV) and Bovine polyomavirus (BPyV) . . summarizes the different viruses and other microorganisms used to assess fecal contamination in aquatic environments. The prevalence of these microorganisms has been identified in multiple water environments , for this reason monitoring source water, DWS and biofilters with this technology can help ensure adequate water quality for all individuals. In addition to targeted metagenomics, another culture-independent approach is shotgun metagenomics, in which the contents of the complete genome are studied . In this approach, long DNA molecules belonging to the microorganisms present in the ecosystem under study, break into random fragments that are sequenced afterward . Shotgun sequencing examines all metagenomic DNA instead of only the hypervariable regions . When the nucleotide detection of target DNA molecules is direct, the efficacy of data analysis increases, and the coverage regions of phylogenetical relevance are extended . The average prices of targeted and shotgun metagenomics can be found in . Even though these tools have the potential to provide important information regarding the microbial populations present in source water, DWS and biofilters, there is still evidence of a vast number of uncategorized, uncharacterized, and unclassified environmental microorganisms . For this reason, complementary methods should be used to fill in the existing taxonomic “blind spots” that separate the field today from vital and novel phylogenetic information. Culturomics Culturomics is a high-throughput culture approach that can be used to overcome the limitations of unclassified environmental bacteria that metagenomics faces . This technique was originally introduced for the study of microbiota in the human gut . This method combines matrix-assisted laser desorption/ionization-time of flight (MALDI–TOF) mass spectrometry with 16S rRNA sequencing to identify novel, viable bacterial colonies . Its principle of improving culture media with the precise conditions these microorganisms require for their growth can help fill in the gaps of the so-called “uncultivable” bacteria of aquatic environments. 
In culturomics, diverse adjustments are applied to the incubation conditions (temperature, incubation time, media enrichment, pH, oxygen demand, and so forth) to promote the growth of otherwise uncultivable bacteria . Consecutively, they are cleaned and prepared for the MALDI-TOF method, and if the taxonomic credentials in the database are not found, a supplementary amplification and sequencing using 16S rRNA is conducted . The transcendental results obtained from the human gut microbiome (247 new species of bacteria and their genomes unveiled) make this method a suitable candidate to complement metagenomics for the uncategorized phyla found in source water and DWS . Conversely, the limitations of culturomics lie in the inability to identify species that do not count with any genome registration in libraries of reference . Furthermore, the sample-processing capacity is pointedly lower compared to the volume that metagenomics tools can handle in a single day . Microfluidics or Lab-on-a-chip technologies Indigenous and remote communities would significantly benefit from fast, transportable, and on-site sensitive methods to recognize bacterial, viral, and protozoan pathogens in DWS. The term microfluidics refers to the process of small (10 −9 L to 10 −18 L) fluids that circulate into micrometer channels with components like microfilters, microvalves, micromixers, and sensors such as detectors at the cellular and molecular level . Generally, the channel size used in the analytical devices employed in microfluidics oscillates between 10 mm to 200 mm and even one mm in some cases . Lab-on-a-chip (LOC) employs a microsystem where the surface, gravitational, viscous, and other forces integrated with either active or passive microvalves are carefully applied to obtain a real and complete micro-laboratory . Active microvalves require external actuation ( i.e., electromagnetism, thermal expansion) while passive valves base their functioning on the pressure gradient . These (passive) microvalves are generally used for micropumps ( i.e., as check-valves) . tested an insolation chip (ichip) composed of more than a thousand miniature diffusion chambers to inoculate microorganisms from diverse environments including aquatic settings . The application of this technique has resulted in a higher recovery compared to traditional cultivation methods alone . Additionally, the species found differed significantly from the ones recovered in Petri dishes, revealing considerable phylogenetic novelty . The specific functioning, types, and components of microfluidics have been broadly studied in different areas and can be revised elsewhere . Generally, there are more than five types of microfluidic platforms including linear actuated devices, microfluidic large-scale integration, centrifugal microfluidics, segmental flow microfluidics, electrokinetics, surface acoustic waves, pressure-driven laminar flow, and lateral flow tests . For instance, centrifugal microfluidics allows the management of more sensitive liquids such as nucleic acids . Some examples of the application of centrifugal microfluidics include DNA extraction and nucleic acid–based assays, protein crystallization and protein-based assays, integrated plasma separation, clinical chemistry assays, and chromatography tests . Similarly, lateral flow tests, have been successfully used to detect infectious agents such as Salmonella spp. , anthrax ( Bacillus anthracis ), viruses, and even small molecules such as antibiotics . 
In Canada, the forced relocation of Indigenous people caused the DWS in Indigenous communities to be placed in remote locations . Therefore, these DWS are susceptible to irregular connectivity and a limited number of qualified personnel on site, and they generally depend on sporadic water operation and maintenance . Moreover, several DWS on reserves have been reported to lack modern infrastructure and are in urgent need of significant upgrades . Additionally, on Indigenous reserves, water delivery and maintenance responsibilities are shared between the Federal Government, specifically Indigenous Services Canada and Health Canada, and FN community leadership groups . This shared administration and these fragmented responsibilities have led to divergences in drinking water regulations on Indigenous reserves . In addition, Indigenous communities' dependence on federal funding to improve DWS, along with the slow and often delayed response from the government within a segregated and flawed system of governance, also contributes to Indigenous reserves experiencing water insecurity . Water insecurity on reserves represents a health threat and could be one of the causes of FN populations having the lowest projected life expectancies across Canada .
In the country, most water-related activities are the responsibility of the corresponding provincial or territorial authority, and the obligations regarding monitoring water contaminants differ vastly from province to province . In some provinces, when potential hazards are identified in the water sources of public water systems ( e.g., E. coli , cyanobacteria blooms, disinfection by-products or DBP), local environmental authorities may issue alerts to the public known as "water advisories" to warn about water consumption or to ban its use completely . The length of the advisory can be less than a year (short-term advisory) or longer (long-term advisory) . Depending on the water quality results, the nature of the water issue encountered, and a risk evaluation of the conditions at the site of the DWS, different warnings may be issued by the responsible environmental public health officer . Boil water advisories (BWA), do not consume (DNC), and do not use (DNU) advisories are the types recommended by Environment and Climate Change Canada . BWA, the most common type of advisory, are most often precautionary and are generally issued when poor water disinfection, deficient filtration, pressure loss in the distribution system, or inadequate maintenance of the equipment used to treat water is recognized . Similarly, DNC (also referred to as "do not drink") and DNU advisories are recommended during emergencies: for example, catastrophic events, chemical spills or other pollutants that affect human health after short-term exposure, unexpected changes in the physical characteristics of water, or intrusion of undetermined contaminants through cross-connection problems . When the contaminant present in water can affect human health only through ingestion, a DNC advisory is issued . On the other hand, DNU recommendations are communicated when the existing pollutant has an effect through dermal and/or inhalation contact . Yet every province and territory uses its own terminology to issue its water quality recommendations, and some regions (such as Ontario, Alberta, and some parts of the Arctic) do not report water advisories for minor drinking water systems . Since 2015, more than 100 long-term water advisories have been lifted on reserves around the country . Nevertheless, as of September 2024, there are approximately 27 FN with long-term drinking water advisories on public systems on reserves . Astonishingly, some of them have been dealing with advisories for almost 30 years . Regarding short-term BWA, official sources only report those located south of the 60th parallel . However, when considering all provinces and territories of Canada, the estimated number of advisories to be resolved might exceed 1,000 cases . In FN communities in Canada, the DWS infrastructure has been reported to be either utterly absent, inappropriate, obsolete, or of low quality . The use of wells, and of trucked water delivered to household storage facilities such as cisterns when a potable source of water is not found, contributes to water insecurity . The construction materials accepted by the Canadian Standards Association include steel, stainless steel, concrete, reinforced concrete, fiberglass, and polymers ( e.g., polyethylene) . The materials and components used in water storage tanks are critical to maintaining water quality .
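The BWA/DNC/DNU categories described above follow a simple decision logic, sketched below purely for illustration before turning to storage materials in more detail. The hazard descriptors are simplified assumptions, not an official classification tool.

```python
from enum import Enum

class Advisory(Enum):
    BWA = "Boil water advisory"
    DNC = "Do not consume"
    DNU = "Do not use"

def recommend_advisory(treatment_failure: bool,
                       harmful_if_ingested: bool,
                       harmful_on_contact: bool) -> Advisory:
    """Map a simplified hazard description to an advisory type.

    Illustrative only: real advisories are issued by environmental
    public health officers after a site-specific risk evaluation.
    """
    if harmful_on_contact:
        # Pollutant acts through dermal and/or inhalation exposure.
        return Advisory.DNU
    if harmful_if_ingested:
        # Pollutant affects health only through ingestion.
        return Advisory.DNC
    if treatment_failure:
        # Poor disinfection, deficient filtration, pressure loss, etc.
        return Advisory.BWA
    raise ValueError("No hazard identified; no advisory required.")

# Example: a chemical spill that is dangerous to drink but safe to touch.
print(recommend_advisory(treatment_failure=False,
                         harmful_if_ingested=True,
                         harmful_on_contact=False).value)
```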
One example of how storage materials affect water quality is the use of corrosion-prone materials that harbor the development of iron- and iron/manganese-oxidizing bacteria such as Gallionella spp. and members of the Siderocapsa genus . Generally, iron-oxidizing bacteria ( e.g., Gallionella spp.) and sulfate-reducing bacteria such as Desulfovibrio spp. are responsible for microbially induced corrosion and biofilm development on exposed metal surfaces. Although biofilm development in these types of water storage units might act as a kind of barrier against corrosion, it can also increase the risk of pathogen development that can potentially affect human health . The complex, heterogeneous microorganisms present in biofilms can communicate through chemical interactions (known as "quorum sensing") to facilitate multicellular activities, exchange nutrients, and transfer hereditary material . In terms of biofilm growth in particular, iron water storage tanks have been found to harbor higher total bacterial counts than their counterparts made of plastic . Nevertheless, plastic cisterns have been associated with unacceptable levels of metals such as lead, aluminum, and copper, among others, and even carcinogenic compounds ( e.g., benzene) in potable water . Moreover, high temperatures, changes in pH, and the presence of chloramines can promote the degradation of the polymeric matrix of these types of cisterns and favor the transfer of toxic compounds to stored water . It is equally important to ensure proper closure of the cisterns and to avoid leakages . Algae, fungi, protozoa, bacteria, and viruses, for example, can enter from windblown dust, debris, and rainwater if the water storage tank is not properly sealed . Additionally, leakages in the storage unit could allow the introduction of bird feces, which are known to carry harmful bacteria such as Salmonella spp. and Campylobacter spp. . The recommendations for these water storage tanks state that testing the water at least once per year is advisable to verify the presence or absence of microbiological indicators ( i.e., total coliforms and E. coli ) . Nonetheless, some studies have confirmed that the water quality in these water storage tanks is below the stipulated standards . The Guidelines for Canadian Drinking Water Quality state that the maximum acceptable concentration for E. coli and total coliforms is 0 CFU/100 mL of water . However, E. coli counts higher than 60,000 CFU/100 mL were detected in drinking water distribution systems in a fly-in FN community in the Island Lake region in the province of Manitoba . Likewise, unacceptable levels of E. coli (>1,000 CFU/100 mL, >900 CFU/100 mL, and >50 CFU/100 mL) were found in piped water and in a fiberglass tank used to store water for consumption in a FN community located in Manitoba (M. Moniruzzaman & M. Uyaguari-Diaz, 2024, unpublished data). Moreover, in the same community, total coliform counts exceeded acceptable detection limits (>1,000 CFU/100 mL, >900 CFU/100 mL, >50 CFU/100 mL, >0 CFU/100 mL, and >1 CFU/100 mL) in two different locations in three out of 16 sampling events conducted from April 2023 to September 2024 . Additionally, high heterotrophic counts (>1,000 CFU/100 mL and >500 CFU/100 mL) were reported in both the piped water and the fiberglass tank within the same community (M. Moniruzzaman & M. Uyaguari-Diaz, 2024, unpublished data).
Even though heterotrophic bacteria do not represent a direct threat to public health and the counts obtained did not exceed the maximum acceptable concentration (500 CFU/mL), these high counts can interfere with E. coli and total coliform recovery methods . Additionally, lower heterotrophic counts have been associated with better maintenance of water facilities . Furthermore, the presence of antibiotic resistance genes (ARGs) such as ampC ( β -lactam resistance), mecA (methicillin resistance), and sul1 (sulfonamide resistance) in both source and drinking water on FN reserves has been reported . Moreover, pathogenic bacteria such as Legionella pneumophila , the agent of Legionnaires' disease, a high-risk form of pneumonia, have also been isolated. These microorganisms or genetic elements (such as ARGs, plasmids, and integrons) can infiltrate the water system through compromised plumbing or cisterns exposed to contaminated water. Moreover, biofilms, which begin building up on submerged surfaces within the first week, can further facilitate their spread . In Canada, all public, semi-public, and private DWS are regulated by provincial and territorial authorities, with guidelines tailored to the specific type of source water used . In potable water sources in the province of Manitoba (where the fourth highest number of Indigenous people live), for example, levels of dissolved organic carbon or DOC (a component of NOM) often exceed 20 mg/L, whereas the typical DOC concentration of prairie surface waters is on average 8–12 mg/L . NOM is generally present in natural aquatic sources as a result of the breakdown of plant material in the geosphere and of the by-products of bacteria and eukaryotes ( e.g., algae) . Seasonal changes and runoff of DOC from the land into the source water ( e.g., during storms) can also influence the levels of NOM in aquatic environments . The presence of NOM in drinking water treatment has been associated with bacterial regrowth and with reduced effectiveness in inactivating other microorganisms, such as bacteriophages and Cryptosporidium , among others. Organic and inorganic complexes represent a source of energy for heterotrophic and chemoautotrophic bacteria, respectively . NOM can also transport both hydrophobic organic compounds and toxic heavy metals such as copper (Cu), arsenic (As), lead (Pb), mercury (Hg), cobalt (Co), iron (Fe), and chromium (Cr) . When NOM interacts with the chlorine used for water disinfection, different halogenation and oxidation reactions result in the formation of DBP, which include potentially genotoxic organic compounds. For instance, trihalomethanes (THMs) and haloacetic acids (HAAs) are the DBP regulated in the Guidelines for Canadian Drinking Water Quality. The maximum acceptable concentration within Canada for these compounds is 100 µg/L for THMs and 80 µg/L for HAAs . However, it has been documented that more than 300 water systems in Canada serving populations of less than 5,000 people have exceeded the maximum permitted concentration of HAAs . Moreover, the additional costs associated with high levels of NOM arise mainly because an increase in DOC requires more intensive coagulation and filtration during drinking water treatment .
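To make the guideline values cited above concrete, the short sketch below screens a set of sample results against them: 0 CFU/100 mL for E. coli and total coliforms, 500 CFU/mL for heterotrophic plate counts (note the unit conversion from CFU/100 mL), 100 µg/L for THMs, and 80 µg/L for HAAs. The sample numbers are hypothetical, not measured data.

```python
# Illustrative screening of water quality results against the guideline
# values cited in the text. Sample numbers are hypothetical.
GUIDELINES = {
    "e_coli_cfu_per_100ml": 0,        # maximum acceptable concentration
    "total_coliforms_cfu_per_100ml": 0,
    "hpc_cfu_per_ml": 500,            # heterotrophic plate count
    "thms_ug_per_l": 100,             # trihalomethanes
    "haas_ug_per_l": 80,              # haloacetic acids
}

def screen_sample(sample: dict) -> list[str]:
    """Return a list of parameters that exceed their guideline value."""
    exceedances = []
    for parameter, limit in GUIDELINES.items():
        value = sample.get(parameter)
        if value is not None and value > limit:
            exceedances.append(f"{parameter}: {value} > {limit}")
    return exceedances

# Hypothetical cistern sample; HPC reported as CFU/100 mL and converted.
hpc_per_100ml = 1_000
sample = {
    "e_coli_cfu_per_100ml": 50,
    "total_coliforms_cfu_per_100ml": 900,
    "hpc_cfu_per_ml": hpc_per_100ml / 100,  # 10 CFU/mL, below 500
    "thms_ug_per_l": 85,
    "haas_ug_per_l": 95,
}
for issue in screen_sample(sample):
    print("Exceeds guideline ->", issue)
```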
Such increases in coagulation and filtration translate into a significant demand for coagulants such as aluminum sulfate (Al2(SO4)3), polymerized ferrous sulfate (PFS), poly-aluminum chloride (PAC), or chitosan to limit biological growth, and for chemicals to adjust the pH of the water . In Canadian urban settings such as Toronto, Winnipeg, Edmonton, and Regina, the drinking water processes used to remove NOM and other water contaminants may include the following phases: • Coagulation/flocculation: to aggregate and grow the size of the particles and NOM present. • Sedimentation: to remove suspended solids from the water. • Ozonation: to decompose NOM into low-molecular-weight fractions and chemically destroy microbial cells . • Rapid sand filtration: commonly used to remove biodegradable organic matter, ammonium, and other organic micropollutants. • Addition of chlorine: to inactivate remaining pathogenic and non-pathogenic microorganisms. • UV disinfection: to induce biochemical inactivation of waterborne parasites. As mentioned above, coagulation is the most commonly used method for removing NOM in Canada . Coagulation reduces repulsion forces and aims to transform dissolved organic matter into neutral particles by adsorption onto aluminum- or iron-based coagulants . The particles then accumulate efficiently through flocculation and are removed afterward by clarification . The downsides of applying coagulation in remote DWS include high costs and the high doses of coagulants and other chemicals needed for pH adjustment . Depending on the water source and the specific conditions of the water to be treated, additional drinking water treatment methods include: • High-pressure membranes: High-pressure membranes are advanced filtration methods developed to remove dissolved matter from water, at the cost of a high energy demand. They are generally used for groundwater to reduce salt content, nitrates, and other organic and inorganic micropollutants . Examples of high-pressure membrane filtration include reverse osmosis (RO) and nanofiltration (NF). RO is a pressure-driven technology that uses a semi-permeable membrane typically made of cellulose or polyamide. This allows water-sized particles to pass while blocking solids, dissolved matter, colloids, salts, and organic matter with a molecular weight greater than 50 to 100 Da . Likewise, NF mostly uses polymer membranes with larger pore sizes (1–100 nm) than those used in RO . The pressure used in NF is lower, so it consumes less energy than RO . The limitations of RO and NF are the high cost of the membranes, maintenance, and biofouling. These methods are not easy to implement on Indigenous reserves due to their high energy demand (energy that is commonly not available in remote communities), the costs associated with membrane replacement, and the need for highly trained operators . • Low-pressure membranes: Low-pressure membranes are advanced water treatment methods that remove macromolecules (such as dyes, proteins, and polysaccharides) with a low energy demand. Generally, these membranes are not capable of eliminating all dissolved organic matter on their own because of membrane fouling; therefore, an appropriate water pre-treatment is required . Examples of low-pressure membrane processes include microfiltration (MF) and ultrafiltration (UF). These membranes are commonly used after the application of coagulants, which improves the permeate flux and helps to avoid membrane fouling .
Notably, while MF and UF are effective for particle and microbial removal, the drawback is that the membranes used in these methods are recognized to be fragile and costly . Additionally, in most cases only a fraction of the organic components is removed, and MF and UF membrane treatments fail to remove significant levels of NOM . Consequently, low-pressure membrane filtration systems are regarded as pre-treatments for more sophisticated methods such as NF or RO, and their cost weighs against their implementation in the DWS of Indigenous communities . • Ozone: Ozone is an advanced oxidation process that is useful for breaking organic chains and detaching aromatic rings in recalcitrant organic complexes, along with efficiently inactivating microorganisms . • Biological activated carbon (BAC): BAC filters are a type of water treatment that combines the adsorption of activated carbon with biodegradation by microorganisms to purify water. In this type of filter, activated carbon is used as a carrier that supports the growth of microorganisms, which subsequently degrade organic compounds . Ozone is often paired with BAC filtration, and BAC filters following ozonation have shown significant effectiveness in removing NOM, ozonation by-products, DBP precursors, and taste and odor compounds . There are more than 800 water facilities using ozone as part of their water purification system across Canada . Some disadvantages of ozone treatment, depending on the water matrix (surface water or wastewater) and/or residual pollutants, are the incomplete degradation of organic substances, which results in the generation of undesirable by-products ( e.g., brominated organics, aldehydes, and carboxylic compounds) . Furthermore, as a result of ozonation, DOC levels can increase because of the liberation of extracellular organic matter and other proteins and polysaccharides . Moreover, high ozone doses are required for the successful inactivation of microorganisms . Lastly, ozone has a short lifetime and requires on-site production, which is not an option for water systems on Indigenous reserves due to their remote locations . Regarding the limitations of BAC systems, their gradually decreasing adsorption capacity over time and their low DOC removal should be noted . When dissolved organic matter and other complex compounds are not adequately removed during water treatment, pathogenic microorganisms can proliferate inside the water distribution system . Without proper maintenance, this method therefore represents a health risk for consumers in addition to poor water quality . • Ion exchange (IEX): IEX is an electrochemical method in which ions from the water body are swapped with ions held by resins that contain electrically charged active centers with acidic or basic groups . The most common materials used for IEX resins include methacrylic acid, sulfonated styrene, and divinylbenzene (DVB) . Cation exchangers carry sulfonate or carboxyl groups and use sodium, potassium, and hydrogen as counterions, whereas anionic ion exchange resins contain quaternary ammonium groups with chloride as the counterion . During water treatment for DOC removal, anionic IEX takes place when the negatively charged dissolved organic matter in waters with pH values ranging from 6 to 8 (the case for most drinking water sources) has a higher affinity for the ion exchange resin than the ion being traded . The removal of each anion releases a Cl− ion from the anionic resin . Anionic IEX can eliminate up to 90 percent of DOC .
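As a rough illustration of what that removal efficiency implies, the sketch below applies the up-to-90% removal figure to the ~20 mg/L DOC reported earlier for some Manitoba source waters; both numbers come from this review, and the calculation is indicative only.

```python
def effluent_doc(influent_mg_per_l: float, removal_fraction: float) -> float:
    """Estimate effluent DOC after ion exchange.

    removal_fraction is the fraction of DOC removed (0-1); the text cites
    removal of up to ~0.9 for anionic IEX.
    """
    if not 0.0 <= removal_fraction <= 1.0:
        raise ValueError("removal_fraction must be between 0 and 1")
    return influent_mg_per_l * (1.0 - removal_fraction)

# Source water DOC of ~20 mg/L (figure cited earlier for Manitoba sources)
# treated with anionic IEX at 90% removal.
print(effluent_doc(20.0, 0.90))  # -> 2.0 mg/L remaining
```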
Additionally, the cost of a traditional IEX system fluctuates around $0.1–$0.2 per 1,000 liters of purified water, which is significantly lower than other membrane-based methods . Removing DOC from water through IEX may therefore be considered a viable, efficient, and affordable alternative for rural systems, as it does not require the continuous operational capacity or personnel needed to run other treatment processes. The drawbacks of IEX implementations include the exhaustion of the resins' chloride counter-ions after 3 to 8 weeks of operation (depending on DOC levels, operating conditions, and IEX capacity), which requires frequent regeneration . Regeneration produces brine with elevated concentrations of sodium chloride and NOM that demands careful disposal, since it can negatively affect the aquatic ecosystem and the plumbing infrastructure of the area if discharged directly into the sewer sheds . Thus, the constant transportation of regenerants (10–12% NaCl solution) represents a clear disadvantage for remote Indigenous communities . In IEX, the macro-porous structure of the resin and the presence of abundant carbon sources favor biofilm development on the resin in the absence of regeneration . The establishment of biofilm was once considered undesirable; nevertheless, several recent studies have demonstrated that the biological activity found in IEX resins contributes to the removal of NOM . This high-performance method, now called biological ion exchange (BIEX), extends the lifetime of the saturated IEX resin while requiring little filter maintenance, producing minimal waste discharge, and keeping operational costs low . BIEX has recently been applied in the drinking water system used by the Middle River community located in central British Columbia. This Indigenous reserve, which is part of the Tl'azt'en Nation, had been under a drinking water advisory for over a decade . The implementation of BIEX in this community resulted in high DOC removal, with no need to replace the filter in more than 12 months . Remarkably, after the implementation of BIEX in this community, the drinking water advisory was lifted . Although no consensus has yet been reached regarding the percentage of DOC removed by the biological activity of the resins used in this method, strong evidence suggests that biodegradation may significantly influence DOC levels . This method can be applied in combination with UV disinfection, an advanced oxidation process, and chlorine for NOM elimination . Further research is, however, required to elucidate the dynamics of biofiltration in NOM removal through IEX. It is critical to identify accessible technologies capable of removing NOM and treating source water in the DWS of Indigenous communities, particularly those under long-term drinking water advisories. This will ensure compliance with water quality standards and safeguard the health of all users.
Drinking water treatments in Indigenous communities of other high-income countries
Although information on DWS in Indigenous communities worldwide is limited, reports indicate that many of these communities face challenges similar to those in Canada. The use of untreated groundwater and of storage and transportation facilities such as cisterns and water trucks seems to be a common denominator. In the USA, reports revealed that approximately 12% of Native Americans, including Alaska Native communities, do not have basic sanitation facilities .
In the Navajo Nation, one of the largest Indian reservations in this country, studies showed that the DWS had a very basic infrastructure consisting of wells and pressure and distribution tanks operated by the owners. In this Nation, around 30% lacked piped water . A high number of "tribal facilities" exceeded the safe limits established by the US EPA drinking water regulations for fecal bacteria and heavy metals, including arsenic and uranium . In Australia, most Indigenous people rely on untreated groundwater that does not meet the limits recommended by the Australian drinking water guidelines for chemical and microbiological parameters . In Greenland (Denmark), where more than 80% of the population is Indigenous , source water is treated with sand filters, UV, and/or chlorination. For houses without piped water (approximately 10%), water trucks deliver water to household storage units; alternatively, residents fetch water from tap houses . Although records for small DWS in Greenland are limited or absent, some reports have described water facilities not meeting the microbiological requirements of the European Union drinking water regulations (EU DWR), and BWA are frequent . Within the same context, Indigenous populations such as the Sámi people, who are mostly distributed (∼95%) across the northern parts of Norway, Sweden, and Finland, face challenges related to water and land . Although specific reports on the DWS infrastructure used in Sámi communities are not available, most small DWS in these Nordic countries use groundwater as their main water source . In Finland and Norway, for example, small DWS typically rely on dug wells or boreholes. Generally, water from these DWS does not undergo any treatment before distribution unless the source water has been demonstrated not to be "well protected" . While the water quality has been reported to meet the minimum standards of the EU DWR, there are reports of fecal contamination with E. coli (which does not rule out the presence of other pathogens) in the DWS of both countries . In small DWS of Sweden, untreated groundwater is also distributed unless high levels of iron and manganese are found. If water treatment is required, the water is aerated, filtered through rapid sand filtration, and disinfected; sometimes, groundwater also passes through activated carbon filters . The monitoring of water supplies has been reported to be absent or insufficient in Greenland (Denmark) and Sweden, respectively. These data reveal the vulnerability and ineffectiveness of the DWS on which Indigenous communities around the world depend.
Importance of determining the microbiome inhabiting DWS
One of the main goals of drinking water treatment in Canada is to remove NOM, since its presence is problematic because of the potential formation of DBP , its contribution to biofilm growth , and the high operational costs reviewed above. The use of biologically activated methods in drinking water treatment plants has proven to have a positive impact on the degradation of a fraction of NOM . Even though some micropollutants need specialized electrochemical degradation , the effective depletion of contaminants including pharmaceuticals, personal care products, endocrine-disrupting compounds, arsenic, manganese, and ammonium has been well documented . In this context, biofilters assist in the reduction of membrane fouling, color and odor constituents, disinfectant doses, and DBP precursors .
A better understanding of the physiological, metabolic, biochemical, and ecological characteristics of the microbial networks found in DWS can be achieved through the identification of their genomes . Understanding the growth conditions, symbiotic relationships, pathogenicity risks, and habitat preferences of the microorganisms present in DWS and raw water is critical for making water safe. These factors also provide insights for altering the conditions surrounding the microbiota and modulating the development of biofilms . Regulating the microbial communities in the biologically activated filters used in DWS could lead to the development of advanced approaches that positively impact various aspects of water safety. For instance, monitoring the microbiome in both water filters and source water could facilitate the effective prevention of high-risk pathogens . In addition, when using BAF, efforts should be made to provide an ideal working environment for the microorganisms contributing to the degradation of water contaminants . These efforts will allow the correlation between microbial community structure and biofilter function to be recognized . According to definitions proposed in the literature, the customization of the microbial community present in BAF includes three approaches: • Bioaugmentation: the inoculation of pivotal endogenous or exogenous microorganisms from an enriched source into the filter of interest, to accelerate the normal microbial establishment process and speed up biodegradation . • Amendment: the adjustment of exogenous compounds required for biological activity, such as nutrients and oxidants ; and • Supplementation: the addition of endogenous substances to achieve higher amounts than those naturally contained within the filters . Although not considered in most operating water systems using BAF, enhancing these biological activities has demonstrated significant improvements in different aspects of the water filtration process . For example, in one bioaugmentation experiment, enriched nitrifying bacteria from an operational sand filter were transferred to a new filter, accelerating the development of key nitrifiers in the filter . This approach makes it possible to speed up the oxidation of ammonium in groundwater, a process that under normal conditions would take several months (Pinto et al., 2016) . In another study, the amendment of nutrients and peroxide resulted in extended filter life as well as reduced breakthrough of DOC and other undesirable components . In contrast, these improvements were not observed in comparable BAF systems operated without this enhancement . Moreover, other research proposed that supplementing phosphorus in the biofilters, but only when P levels are limiting, helps to improve biofilter hydraulics . Under typical operating conditions, the phosphorus reaching the biofilters is low because of its removal by conventional upstream treatments ( e.g., coagulation/flocculation) . The deficit of P contributes to an increase in the extracellular polymeric substances of the biofilm, which has been correlated with bio-clogging and reduced filter durability . Therefore, more effort should be devoted to determining the microbiome established in biofilters in order to increase their durability and secure their correct functioning in DWS. Identifying the microbiome present in biological water filters is the cornerstone for finding putative functions associated with these microorganisms.
Thereby, any attempt at amendment, supplementation, and/or bioaugmentation to improve the biodegradation of any organic or inorganic component should be complemented with the microbial fingerprint of the biofilter, in order to link the activity and symbiosis of the microbiome .
Metagenomics
Culture-dependent methods have been widely used as the reference technique for controlling the presence of pathogenic organisms in water. However, these methods do not provide significant information regarding the total microbial diversity and its changes in source water and DWS . Moreover, some commercial kits used for fecal bacteria quantification in water ( e.g., compartment bag tests) have been demonstrated to underestimate the concentration of pathogenic bacteria, which increases the risk of false-negative results . Besides, most bacterial cells present in water, as well as in BAF and DWS, are not culturable or are outside the detection range of commercial tests . To obtain information about the type, abundance, function, pathogenicity, and metabolic requirements of the drinking water microbiome, culture-independent or molecular methods are the approach of choice . These high-throughput methods can provide more detailed information to monitor BAF, enabling targeted modifications that can enhance the performance of these filters in DWS . Moreover, the rapid decline in the cost of these sequencing tools makes them affordable even for DWS on Indigenous reserves . To date, metagenomics, which involves studying all nucleic acids from a specific sample, has become the preferred method for microbiome analysis, especially using targeted or amplicon-based metagenomic approaches . One of the main reasons for this preference is that the genomic markers used combine highly conserved and hypervariable regions, which allows different species to be identified . The multiple sets of DNA sequences identified during high-throughput sequencing can be used to evaluate the taxonomy of the microorganisms present in drinking water and biofilter microbial communities . For instance, deep amplicon sequencing of 16S rRNA and chaperonin 60 (cpn60) has been used for identifying bacterial communities in source water and DWS . Other microorganisms such as eukaryotes and fungi have also been described in aquatic environments using 18S rRNA and the internal transcribed spacer (ITS) . Similarly, characterization of the viruses found in drinking water has been attempted by studying specific viral groups using biomarkers such as the RNA-dependent RNA polymerase (RdRp) for RNA viruses and gene 23 (g23) and gene 20 (g20) for DNA viruses, among others . Despite the virome characterization obtained using these viral biomarkers, approximately 50% of viral hits remain unknown when searched against public databases . The difficulty lies in the fact that viruses lack "universal" gene markers, which makes the identification of abundance patterns and community structure for all viruses a challenge . Despite this, several viruses have been associated with human fecal contamination, such as Norovirus, Enterovirus, Rotavirus, Hepatovirus A, Pepper mild mottle virus (PMMoV), crAssphage, and human adenovirus (HAdV), among others . Additionally, depending on the source of water, animal-specific enteric viruses can also serve as fecal indicators, as in the case of porcine and bovine adenoviruses (PAdVs and BAdVs, respectively), as well as bovine enterovirus (BEV) and bovine polyomavirus (BPyV) . The viruses and other microorganisms used to assess fecal contamination in aquatic environments are summarized in .
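As a toy illustration of how such marker lists might be used downstream of taxonomic assignment, the sketch below flags samples according to whether the human- or animal-associated fecal indicator viruses named above are detected. The marker groupings follow the text; the sample data and read-count threshold are invented for illustration.

```python
# Hypothetical post-classification screening of viral taxa against the
# fecal indicator groups named in the text. Counts are invented.
HUMAN_MARKERS = {"Norovirus", "Enterovirus", "Rotavirus", "Hepatovirus A",
                 "PMMoV", "crAssphage", "HAdV"}
ANIMAL_MARKERS = {"PAdV", "BAdV", "BEV", "BPyV"}

def fecal_signal(taxon_counts: dict[str, int], min_reads: int = 10) -> dict:
    """Summarize which fecal indicator viruses pass a simple read threshold."""
    detected = {t for t, n in taxon_counts.items() if n >= min_reads}
    return {
        "human_associated": sorted(detected & HUMAN_MARKERS),
        "animal_associated": sorted(detected & ANIMAL_MARKERS),
    }

# Example: a source water sample dominated by bovine markers.
sample = {"crAssphage": 3, "PMMoV": 12, "BEV": 250, "BPyV": 40, "g23_phage": 900}
print(fecal_signal(sample))
# {'human_associated': ['PMMoV'], 'animal_associated': ['BEV', 'BPyV']}
```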
The prevalence of these microorganisms has been identified in multiple water environments ; for this reason, monitoring source water, DWS, and biofilters with this technology can help ensure adequate water quality for all individuals. In addition to targeted metagenomics, another culture-independent approach is shotgun metagenomics, in which the contents of complete genomes are studied . In this approach, the long DNA molecules belonging to the microorganisms present in the ecosystem under study are broken into random fragments that are then sequenced . Shotgun sequencing examines all metagenomic DNA instead of only the hypervariable regions . Because the target DNA molecules are detected directly, the efficiency of data analysis increases and the coverage of phylogenetically relevant regions is extended . The average prices of targeted and shotgun metagenomics can be found in . Even though these tools have the potential to provide important information regarding the microbial populations present in source water, DWS, and biofilters, there is still evidence of a vast number of uncategorized, uncharacterized, and unclassified environmental microorganisms . For this reason, complementary methods should be used to fill in the existing taxonomic "blind spots" that separate the field today from vital and novel phylogenetic information.
Culturomics
Culturomics is a high-throughput culture approach that can be used to overcome the limitation that unclassified environmental bacteria pose for metagenomics . This technique was originally introduced for the study of the microbiota of the human gut . The method combines matrix-assisted laser desorption/ionization-time of flight (MALDI-TOF) mass spectrometry with 16S rRNA sequencing to identify novel, viable bacterial colonies . Its principle of tailoring culture media to the precise conditions these microorganisms require for their growth can help fill in the gaps left by the so-called "uncultivable" bacteria of aquatic environments. In culturomics, diverse adjustments are applied to the incubation conditions (temperature, incubation time, media enrichment, pH, oxygen availability, and so forth) to promote the growth of otherwise uncultivable bacteria . Subsequently, the isolates are purified and prepared for MALDI-TOF identification, and if no taxonomic match is found in the database, supplementary amplification and sequencing of the 16S rRNA gene are conducted . The remarkable results obtained from the human gut microbiome (247 new species of bacteria and their genomes unveiled) make this method a suitable candidate to complement metagenomics for the uncategorized phyla found in source water and DWS . Conversely, the limitations of culturomics lie in its inability to identify species that have no genome registered in reference libraries . Furthermore, its sample-processing capacity is markedly lower than the volume that metagenomic tools can handle in a single day .
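The identification cascade described above (attempt a MALDI-TOF database match first, fall back to 16S rRNA sequencing when no match is found) can be sketched as a small routine. The function names and return values below are hypothetical placeholders, not part of any real instrument API.

```python
from typing import Optional

def identify_isolate(maldi_match: Optional[str],
                     sequence_16s_fn) -> tuple[str, str]:
    """Return (method_used, identification) for a cultured isolate.

    maldi_match: species name returned by a MALDI-TOF database search,
                 or None when no spectrum match is found (assumption:
                 the caller has already run the spectrometer search).
    sequence_16s_fn: fallback callable that performs 16S rRNA
                     amplification/sequencing and returns a taxon name.
    """
    if maldi_match is not None:
        return ("MALDI-TOF", maldi_match)
    # No spectral match: fall back to 16S rRNA gene sequencing.
    return ("16S rRNA sequencing", sequence_16s_fn())

# Example with a placeholder fallback representing a novel aquatic isolate.
print(identify_isolate(None, lambda: "unclassified Comamonadaceae sp."))
```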
Microfluidics or Lab-on-a-chip technologies
Indigenous and remote communities would benefit significantly from fast, transportable, and sensitive on-site methods to recognize bacterial, viral, and protozoan pathogens in DWS. The term microfluidics refers to the processing of small volumes (10⁻⁹ L to 10⁻¹⁸ L) of fluids that circulate through micrometer-scale channels equipped with components such as microfilters, microvalves, micromixers, and sensors acting as detectors at the cellular and molecular level . Generally, the channel size used in the analytical devices employed in microfluidics ranges between 10 µm and 200 µm, and can even reach one mm in some cases . Lab-on-a-chip (LOC) devices employ a microsystem in which surface, gravitational, viscous, and other forces, integrated with either active or passive microvalves, are carefully applied to obtain a true, complete micro-laboratory . Active microvalves require external actuation ( e.g., electromagnetism, thermal expansion), while passive valves base their functioning on the pressure gradient . These passive microvalves are generally used in micropumps ( e.g., as check valves) . One study tested an isolation chip (ichip) composed of more than a thousand miniature diffusion chambers to inoculate microorganisms from diverse environments, including aquatic settings . The application of this technique resulted in higher recovery compared with traditional cultivation methods alone . Additionally, the species found differed significantly from those recovered in Petri dishes, revealing considerable phylogenetic novelty . The specific functioning, types, and components of microfluidics have been broadly studied in different areas and can be reviewed elsewhere . Generally, there are several types of microfluidic platforms, including linear actuated devices, microfluidic large-scale integration, centrifugal microfluidics, segmented flow microfluidics, electrokinetics, surface acoustic waves, pressure-driven laminar flow, and lateral flow tests . For instance, centrifugal microfluidics allows the handling of more sensitive liquids such as nucleic acids . Some examples of the application of centrifugal microfluidics include DNA extraction and nucleic acid-based assays, protein crystallization and protein-based assays, integrated plasma separation, clinical chemistry assays, and chromatography tests . Similarly, lateral flow tests have been successfully used to detect infectious agents such as Salmonella spp., anthrax ( Bacillus anthracis ), and viruses, and even small molecules such as antibiotics . Some of the sample types used in lateral flow tests include nucleic acids for B. anthracis , blood serum for Salmonella spp., nasopharyngeal wash for respiratory syncytial virus (RSV) infection , milk for tetracycline detection, and fecal specimens for Giardia spp. and Cryptosporidium spp. . In lateral flow tests, a capillary-driven process is used to absorb the sample and transport it along the test strip. Three types of antibodies participate in this mechanism: • Tagged antibodies coupled to a signal-generating particle in the "conjugate pad" are hydrated by the sample and eventually bind the antigens contained in the sample. • The sample continues flowing to the incubation and recognition area, where test-line antibodies bind the particles covered with antigens . • On the control line, a third class of antibodies captures the compounds that did not bind any particle . This latter binding confirms a successful test performance . Likewise, binding (or the absence of binding) of antibodies at the test line confirms or rules out the presence of the analyte of interest . To the best of our knowledge and based on the current literature, these methods have not yet been used to test microbial water quality. In this context, lateral flow tests embody a cost-effective, mobile, state-of-the-art alternative for identifying well-known pathogens in the drinking water systems of Indigenous reserves.
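The read-out logic described above reduces to a simple truth table: a visible control line confirms the test ran, and the test (detection) line then indicates whether the analyte is present. A minimal, purely illustrative sketch of that interpretation:

```python
def interpret_lateral_flow(control_line_visible: bool,
                           test_line_visible: bool) -> str:
    """Interpret a sandwich-format lateral flow strip.

    Follows the mechanism described in the text: the control line captures
    unbound tagged antibodies (confirming flow), and the test line captures
    antigen-antibody complexes (indicating the analyte).
    """
    if not control_line_visible:
        # No control line: the sample never reached the end of the strip,
        # or the reagents failed; the result cannot be trusted.
        return "invalid - repeat test"
    return "analyte detected" if test_line_visible else "analyte not detected"

# Example: both lines visible, e.g., a strip targeting Salmonella antigens.
print(interpret_lateral_flow(control_line_visible=True, test_line_visible=True))
```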
The mechanisms that underpin microfluidics could be replicated for the detection of fecal indicator bacteria, viruses, and protozoans in the source water and DWS of isolated areas and Indigenous communities. Nevertheless, the drawbacks of microfluidics include system blockage by small particles, a high risk of contamination given the minimal sample volumes, and premature adsorption of the analytes of interest, among others . Microfluidic platforms represent low-cost, portable, high-precision, time-saving tools that would benefit DWS with higher susceptibility to microbiological contamination, as is the case for DWS located in remote Indigenous reserves . This review highlights that water insecurity in Canada is a complex issue perpetuated by several factors. The first began with the systematic exclusion of Indigenous peoples to geographically isolated reserves, which was accompanied by a lack of planning for policy frameworks and for the basic infrastructure of drinking water facilities. Moreover, the fragmented responsibilities among the federal, provincial, and territorial governments in Canada, along with FN, in the design, construction, operation, and financing of DWS contribute to this long-lasting issue. The second factor is associated with the deterioration of water and the environment and with the obsolete or non-existent DWS in Indigenous reserves, which are inadequate to address microbiological and chemical contaminants. In Canada, some water advisories in Indigenous reserves have not been lifted for almost 30 years . In places where traditional drinking water treatments are not available, due to underfunding, lack of support from authorities, or source-water quality issues (including high NOM levels), efficient and long-term alternatives should be implemented to permanently lift drinking water advisories. To improve DWS under advisories, comprehensive design studies are required in addition to provisional and permanent repairs to their infrastructure. The most common methods to treat drinking water and to remove NOM are coagulation, high- and low-pressure membrane filtration procedures, ozone, IEX, and its variant BIEX. Every method has its advantages and limitations; however, the method chosen should be specific to the source water conditions of the area where it will be implemented. Furthermore, source water protection, biofilter management, and drinking water monitoring are fundamental to prevent the negative health effects of pathogenic microorganisms. Metagenomics, with its falling prices, represents an effective tool to monitor the water microbiome. Additionally, technologies such as culturomics can contribute exceptionally to revealing the currently “unclassified” microbial diversity in DWS. Once the microbial fingerprint of source water and DWS has been identified, practical approaches such as lab-on-a-chip technologies can be implemented for on-site, ultra-fast assessment of microbial water quality. Simplifying water quality monitoring will significantly enhance the promotion of health and access to safe water, principally for consumers in remote Indigenous communities. Providing culturally adapted water services must be a priority to ensure appropriate water quality and availability. To that end, a dynamic interplay between FN authorities and the federal, provincial, and territorial governments is imperative to help prioritize and assign resources to promote water protection and management in Indigenous reserves of Turtle Island.
Evaluation of Antibacterial and Antibiofilm Properties of Phenolics with Coumarin, Naphthoquinone and Pyranone Moieties Against Foodborne Microorganisms
1dc69ef4-50cf-49f3-a8b1-cd7731e5ddf3
11857956
Microbiology[mh]
Procyanidins (PCs) are ubiquitously present in a wide range of foods and crop plants, such as cranberry, cocoa, apple or grape . However, PC composition varies significantly across plant species, owing to differences in the configuration of the main moieties, the types of bonds between units, and the substituents present in the molecules . Numerous studies have evaluated the antimicrobial activity of plant extracts rich in procyanidins and their impact on the inhibition of urinary tract and oral infections, bacterial adhesion, and biofilm formation . However, most publications only evaluate the antimicrobial potential of crude extracts , and only a few of these studies have established correlations between structural attributes and biological activities . The antimicrobial activity of these compounds is mainly based on metabolic disruption, DNA interactions or osmotic imbalance, while their antibiofilm activities include antiadhesive effects as well as the modulation of motility and quorum sensing . PCs can also potentiate the activities of antibiotics and antifungals in diverse ways, exerting a synergistic effect with beta-lactams against Enterobacteriaceae clinical isolates, against extended-spectrum β-lactamase (ESBL)- and metallo-β-lactamase-producing E. coli , and against carbapenemase-producing human pathogens , which makes them valuable tools in the prevention and treatment of urinary tract infections (UTIs) . We have also previously described desirable antibacterial and antibiofilm activities against foodborne bacteria by A-type procyanidins (PCs), a class of natural products from the condensed tannin family, and two main series of analogs related to them. Thus, once the A-type PC cinnamtannin B–1 proved to be more effective than procyanidin B–2 (a B-type PC) , we synthesized several A-type analogs of procyanidin A–2, the structurally simplified version of cinnamtannin B–1, finding that the presence of a nitro (NO 2 ) group at carbon 6 afforded higher activities . Then, seven compounds with a NO 2 group at C–6 and a variable number of OH groups on rings B and D, with or without a methyl group at the C-ring, were synthesized and evaluated. Among them, analog I proved to be the most active regarding the inhibition of biofilm formation and the disruption of preformed biofilms . Taking into account that the presence of an electron-withdrawing group (NO 2 ) on the A–ring seemed to improve the antibacterial and antibiofilm activities, eight additional analogs with chlorine and bromine atoms at C–6 of the A-ring were synthesized and evaluated as potential biocides . The results of that study showed that the halogenated analogs (like analog II ) were more active than the nitro derivatives previously reported . Continuing this work, we report herein on the evaluation of the antibacterial and antibiofilm properties of 27 compounds related to analogs I and II , bearing coumarin, naphthoquinone and pyranone moieties (instead of phloroglucinol or resorcinol at the D-ring), against foodborne microorganisms, in an attempt to advance our knowledge of structure–activity relationships. Among the target bacteria used in this study, the most susceptible strains were Gram-positive ( S. aureus and B. cereus strains). Regarding the structure–activity relationships observed, a coumarin moiety seems to favor the antibacterial activity against S. aureus strains, while the naphthoquinone moiety enhances antibacterial effects against B. cereus .
Moreover, the replacement of OH groups in the B-ring by methoxy groups impairs antibacterial activity of the compounds against target bacteria, while the presence of Cl or OH groups in the molecules seems to enhance the inhibition of biofilm formation as well as the disruption of preformed biofilms. 2.1. Chemistry Compounds 1 – 24 displayed in were obtained by reaction of flavylium salts with three different π -nucleophiles, such as 4-hydroxy-coumarin, 2-hydroxy-naphthoquinone and several pyranone derivatives . Flavylium salts were synthesized by aldol condensation under acidic conditions between salicylic aldehyde and acetophenone derivatives according to procedures previously used by us . 2.2. Antibacterial Activity As targets for these assays, eight strains from Type-Culture, belonging to genera Staphylococcus , Listeria , Escherichia and Salmonella , as well as twelve strains of our own collection of resistant strains from organic foods, both Gram-positive and Gram-negative ones, were used. According to the preliminary standard agar diffusion assay, which showed eight potential susceptible strains and eleven active compounds , minimal inhibitory concentration (MIC) values for each compound were obtained . These assays showed three susceptible strains ( S. aureus CECT 976, S. aureus CECT 828 and B. cereus UJA 27q), with MIC values for various compounds assayed between 25 and 50 μg/mL against them. Analogs 1 , 3 , 6 , 7 , 8 , 9 and 10 showed MIC values of 25 µg/mL against S. aureus CECT 976, while analogs 11 and 12 were able to inhibit the growth of this strain at 50 µg/mL. Thus, the presence of a coumarin moiety seems to favor the antibacterial activity against this strain, as shown in . Lowest MICs against S. aureus CECT 828 were found for analogs 3 , 7 , 9 and 11 (25 µg/mL). On the other hand, analogs 6 , 8 , 10 and 12 showed MICs of 50 µg/mL against these bacteria. The presence of a coumarin unit in the chemical structure seems to enhance again the antibacterial activity against these Gram-positive bacteria . Analogs 3 , 10 and 11 showed MICs of 25 µg/mL against B. cereus UJA 27q. Most of the active analogs against these bacteria have a naphthoquinone moiety in their structure, which seems to increase the antibacterial activity against this Gram-positive bacillus . S. aureus CECT 976 was the most susceptible strain among those studied, as nine analogs were able to inhibit the growth of this strain at a concentration of 25 µg/mL. S. aureus CECT 828 showed similar susceptibility patterns against the same analogs, although the MICs of these compounds were between 25 and 50 µg/mL against this strain. In contrast, B. cereus UJA 27q was the most resistant strain, being susceptible to just three of the analogs tested. 2.3. Antibiofilm Activity 2.3.1. Inhibition of Biofilm Formation Based on the previous results of antibacterial activity, strains S. aureus CECT 976, S. aureus CECT 828 and B. cereus UJA 27q along with the nine compounds that showed the best antibacterial activity were selected for analyzing their ability to inhibit the biofilm formation by these target strains . The best results (with more than 90% of biofilm inhibition by target bacteria) were found for compound 10 (97.9% of inhibition against S. aureus CECT 976, at a concentration of 0.1 µg/mL), compound 3 (97.2% of inhibition at 0.01 µg/mL against B. cereus UJA 27q), compound 1 (96.8% of inhibition at 0.1 µg/mL against S. aureus CECT 976), compounds 8 (95.7% at 50 µg/mL) and 10 (94.2% at 25 µg/mL) against S. 
aureus CECT 828 and finally, compound 11 (93.9% of inhibition at 0.01 µg/mL) against B. cereus UJA 27q. Taking into account their chemical structure, three of these especially active compounds ( 1 , 3 and 8 ) have a coumarin moiety. All of these compounds also have OH groups in their structure, which seems to be of great value in favoring the inhibition of biofilm formation by target bacteria. However, the efficacy of the compounds also depends on the target strain, so compounds 1 and 10 are the most active against S. aureus CECT 976, compounds 8 and 10 against S. aureus CECT 828 and compounds 3 and 11 against B. cereus UJA 27q, respectively . A second group of compounds, which showed biofilm inhibition of 65 to 90% on the selected target strains, includes compounds 9 at 10 µg/mL, compound 3 at 10 µg/mL and compound 6 at 25 µg/mL against S. aureus CECT 976 (with percentages of biofilm inhibition of 88.9%, 85.7% and 83.9%, respectively); compound 11 at 0.01 µg/mL against S. aureus CECT 828 (81.2% inhibition) and against S. aureus CECT 976 (77.6% of inhibition); compound 9 at 0.01 µg/mL and compound 7 at 1 µg/mL against S. aureus CECT 828 (71.25% and 69.9% of inhibition, respectively); compounds 8 at 25 µg/mL and 12 at 10 µg/mL against S. aureus CECT 976 (69.6% and 69.5% of inhibition) and at 0.01 µg/mL against S. aureus CECT 828 (68.46% of inhibition) and compound 10 at 0.01 µg/mL against B. cereus UJA 27q (68.2% of inhibition). Similar results were previously found for analogs with a NO 2 group at the A-ring against B. cereus UJA 27q . Of the eight compounds that form this group, five of them, including the three with highest activity, have the coumarin moiety. Moreover, all of them excluding compound 6 have OH groups in their chemical structure, which increase their antibacterial activity as previously shown. In contrast, compound 6 has a Cl group, which favors the ability to inhibit biofilm formation by target cells. The most susceptible strain in these assays was S. aureus CECT 976, its biofilm formation being inhibited by six of the assayed compounds ( 9 , 3 , 6 , 11 , 8 and 12 ). Four compounds ( 11 , 9 , 7 , 12 ) were able to inhibit biofilm formation by S. aureus CECT 828 and just one compound ( 10 ) had the same effect on B. cereus UJA 27q, along with the previously studied analogs with a NO 2 group at the A-ring . Finally, less active compounds, with 50 to 65% of inhibition in biofilm formation by the target bacteria, were compound 6 at 10 µg/mL against S. aureus CECT 828 (51.9% of inhibition), compound 7 at 10 µg/mL against S. aureus CECT 976 (50.9% of inhibition) and compound 3 at 1 µg/mL against S. aureus CECT 828 (49.22% of inhibition). All three of these compounds share the coumarin moiety in their chemical structure. In contrast, some compounds, such as 16 and 17 , increase the biofilm formation by S. aureus CECT 976, S. aureus CECT 828 and B. cereus UJA 27q. Regarding the structure–activity relationships, compounds with a coumarin moiety show higher activity against S. aureus strains and those with a naphthoquinone moiety seem to be more active against B. cereus . 2.3.2. Disruption of Preformed Biofilms The ability of the analogs to disrupt previously formed biofilms by food pathogens was also evaluated . Strains S. aureus CECT 976, S. aureus CECT 828 and B. cereus UJA 27q and the nine analogs with best activity were selected again to analyze the disruption of preformed biofilms.
All these compounds were able to disrupt more than 50% of preformed biofilms for at least one of the assayed bacteria. Moreover, concentrations of 0.01 µg/mL or 0.1 µg/mL were enough to disrupt the biofilms, regardless of the assayed strain. The best activity was observed for compounds 10 and 11 , both with a naphthoquinone moiety and with two OH groups in the molecule. Compound 8 at 0.01 µg/mL also stands out, being able to disrupt 86.2% of the preformed biofilm by S. aureus CECT 976, together with compound 11 at 0.01 µg/mL against B. cereus UJA 27q, with disruptions of more than 75% of the preformed biofilms.
Compared to previous results obtained using analogs with phloroglucinol or resorcinol as the D–ring , higher antibacterial activities were detected for these new analogs, although analog IV described in showed antibacterial activity against B. cereus UJA 27q similar to that of analogs 2 and 11 reported here. These results point to the nucleophilic unit used in the synthesis of A–type PC analogs as one of the most important aspects to consider when designing new antibacterials derived from A–type proanthocyanidins. In that sense, coumarin and/or naphthoquinone instead of phloroglucinol, resorcinol and/or pyranone moieties significantly increase the antibacterial activity of the prepared compounds against these foodborne bacteria. Our synthetic compounds showed higher antibacterial activities than natural products, as previously described for chlorinated thymol and carvacrol derivatives against S. aureus and P. aeruginosa . Similar results have also been previously described by us with phenolic compounds against these target strains. We tested six analogs of A–type proanthocyanidins that were able to inhibit the growth of twenty-one foodborne bacteria, as well as inhibit biofilm formation and disrupt preformed biofilms by these target bacteria .
The hydroxylation at positions 5 and 7 on the A–ring has been previously described to play an important role in the antibacterial activity of flavonols . Moreover, hydroxylations on the C–ring increased the activity. Therefore, the number of monomeric subunits and the location of B–ring hydroxyl groups of the flavan-3-ol monomer are important factors that define the chemistry and bioactivities of condensed tannins (CTs) . Our results also corroborate these previously established structure–activity relationships, as the presence of OH groups on the B–ring and halogen atoms such as Cl on the A–ring provides higher antibacterial activities . Regarding the antibiofilm activity of our compounds, various biofilm inhibitory mechanisms have been reported for CT, such as bacterial growth reduction properties, bacterial membrane impairment, and inhibition of the production of an extracellular matrix against P. aeruginosa . Cranberry proanthocyanidins have also shown antibiofilm properties against P. aeruginosa by down-regulating the expression of citric acid cycle and ATP synthesis proteins in bacterial metabolism , and CT from astringent persimmons showed anti-biofilm activity against intraoral bacteria by reducing bacterial hydrophobicity . The proanthocyanidins from highbush blueberry are also able to inhibit biofilm formation by altering cell membrane integrity . In contrast, the increase in biofilm formation by S. aureus CECT 976, S. aureus CECT 828 and B. cereus UJA 27q induced by compounds 16 and 17 is probably due to the capacity of some phenolic compounds to induce partial bacterial lysis and subsequent aggregation and membrane fusion, which may favor biofilm formation . Similar results have been previously detected, and a paradoxical effect has been described for phenolic compounds against E. coli with cinnamtannin B–1 and against Candida albicans with proanthocyanidins . This effect is attributed to the association of some phenolic compounds with each other as the concentration increases and to the formation of aggregates with proteins and peptides , which reduces the effective concentration of these phenolic compounds. Changes in exopolysaccharide (EPS) production or motility in both Gram–positive and Gram–negative bacteria, as well as changes in hydrophobicity, may also account for the antibiofilm activities we have found in our analogs, as previously described for some natural and derived compounds . In general, tannin compounds act against bacteria by interfering with the bacterial cell wall and inhibiting fatty acid biosynthesis pathways, causing the disintegration of bacterial colonies . They also act through iron chelation, damage to the cell membrane, inhibition of enzyme activities or interaction with proteins . Interaction of tannins with cell wall synthesis also makes bacteria more susceptible to osmotic lysis, and the alteration of the structure of bacterial membranes may increase their fluidity, enhancing the effect of antibiotics . Proanthocyanidins can also bind to the lipopolysaccharides of Gram–negative bacteria, leading to destabilization of the integrity of the outer membrane. They may also inhibit numerous bacterial enzymes, such as protease, phospholipase, urease, neuraminidase, and collagenase . The expression and activity of the urease gene in Proteus mirabilis have also been described to be inhibited by tannic acid, with a subsequent reduction in biofilm formation.
Recent research efforts are addressing this gap, suggesting that PCs exert antibiofilm activities either by modulating quorum-sensing (QS) systems or by affecting elements such as the composition of the EPS matrix and bacterial motility , although further studies are necessary to corroborate this hypothesis. In general, our compounds reported here present high antibacterial and antibiofilm activities specifically against three Gram–positive bacteria of great relevance for the food sector, including two strains of S. aureus , which are widely studied as a target of phenolic compounds, mainly because of the high virulence in methicillin–resistant S. aureus (MRSA) and their capacity to cause recurrent and durable infections in humans . However, wider studies including a panel of reference strains of bacteria should be conducted in order to contribute to the active development of new food packaging preventing contamination by foodborne pathogens, lengthening the food shelf life and increasing the options of additives that can be used industrially. 4.1. General Experimental Methods The solvents and reagents used, reactions performed and instrumentation were reported by us in a previous work . In brief, all reactions were performed under inert atmospheric conditions at either room temperature or 50 °C. Reaction progress was monitored using analytical thin-layer chromatography (TLC) on silica gel 60 F254 precoated aluminum sheets (0.25 mm, Merck Chemicals, Darmstadt, Germany), with visualization achieved under ultraviolet light at 254 nm. Purification of the synthesized compounds was carried out via column chromatography (CC) using silica gel 60 (particle size 0.040–0.063 mm, Merck Chemicals, Darmstadt, Germany). Nuclear magnetic resonance (NMR) spectra, including 1 H and 13 C, were recorded on a Bruker Avance 400 spectrometer (Bruker Daltonik GmbH, Bremen, Germany) operating at 400 MHz for 1 H and 100 MHz for 13 C. Deuterated solvents such as methanol (CD 3 OD), chloroform (CDCl 3 ), and dimethyl sulfoxide ((CD 3 ) 2 SO) were used, with a drop of deuterated trifluoroacetic acid (TFA- d ) added for flavylium salts to establish acidic conditions. High-resolution mass spectra (HRMS) were obtained using an Agilent 6520B Quadrupole Time-of-Flight (QTOF) mass spectrometer (Agilent Technologies, Santa Clara, CA, USA). 4.1.1. General Procedure for the Synthesis of Flavylium Salts In a round-bottom flask, a salicylaldehyde derivative (1 mmol), an acetophenone derivative (1 mmol), concentrated H 2 SO 4 (0.3 mL, 5.4 mmol), and acetic acid (HOAc, 1.3 mL) were combined. The mixture was stirred at room temperature overnight, as previously described . Subsequently, diethyl ether (Et 2 O, 20 mL) was gradually added, resulting in the precipitation of a reddish solid. The solid was collected by filtration, thoroughly washed with additional diethyl ether, and dried. The synthesized flavylium salts were consistent with our prior reports, yielding comparable efficiencies, and their structures were confirmed by comparison to previously reported spectral data . 4.1.2. General Procedure for the Synthesis of 2,8-Dioxabicyclo [3.3.1] Nonanes ( 1 – 24 ) In a round-bottom flask, a flavylium salt (0.5 mmol) was reacted with 4-hydroxycoumarin, 2-hydroxy-1,4-naphthoquinone, 4-hydroxy-6-phenyl-5,6-dihydro-2H-pyran-2-one, 6-(4-chlorophenyl)-4-hydroxy-5,6-dihydro-2H-pyran-2-one, or 4-hydroxy-6-(4-methoxyphenyl)-5,6-dihydro-2H-pyran-2-one (0.5 mmol) in absolute methanol (MeOH, 8 mL). 
The reaction mixture was stirred at 50 °C overnight in an oil bath, as previously detailed . Following the reaction, the solvent was evaporated, and the resulting crude product was purified using column chromatography with silica gel 60 as the stationary phase. The synthesized dioxabicyclic derivatives ( 1 – 24 ) were consistent with our previous reports , and their structures were confirmed by comparison with previously reported spectral data . 4.2. Antibacterial Activity With the aim of estimating the efficacy of the compounds against different foodborne bacteria, their antibacterial and antibiofilm activities were evaluated. The compounds were dissolved in dimethyl sulfoxide (DMSO) (Sigma-Aldrich, Madrid, Spain) and serially diluted for antimicrobial and antibiofilm assays. All experiments were carried out in triplicate. Preliminary studies on the antibacterial activity of the compounds were performed by the standard agar diffusion method as previously described . Next, minimal inhibitory concentration (MIC) values for each compound were obtained by the broth microdilution method, according to the recommendations of the CLSI (2015) . Briefly, serial dilutions of the compounds in tryptic soya broth (TSB) (Scharlab, Barcelona, Spain) were incubated with bacterial suspensions of the target strains (10⁵ CFU in TSB) for 24 h at 37 °C. Then, plates were read on an iMark Microplate Reader (Bio-Rad, Madrid, Spain) at OD595 to determine the minimal concentration of each compound able to inhibit bacterial growth. As targets for these assays, strains from Type-Culture Collections (the Spanish collection, CECT, and the University of Gothenburg collection, CCUG) as well as strains of our own collection from organic foods were used . 4.3. Biofilm Formation Inhibition Assay and Disruption of Preformed Biofilm In order to detect antibiofilm activities of the compounds against the target strains, bacteria were incubated with 10-fold serially diluted purified compounds according to Ulrey et al. . Bacterial suspensions (10⁵ CFU in TSB) were incubated with increasing concentrations of each compound (24 h, 30 °C). Wells with bacterial suspensions and TSB medium were run in parallel as positive controls for biofilm formation. All wells were washed with tap water, and the biofilms fixed with methanol. The plate was stained with 0.3% crystal violet and read on an iMark Microplate Reader (Bio-Rad, Madrid, Spain) at OD595. The ability of the analogs to disrupt previously formed biofilms by food pathogens may also be of great interest for food industries, so cells were allowed to form biofilms for 24 h in a subsequent assay, and once the bacteria had formed these structures, diluted compounds were added to the plates in order to detect the remaining biofilm after a second incubation (24 h, 30 °C) by the crystal violet stain method. 4.4. Statistical Analysis The average data and standard deviations from absorbances of antibiofilm assays were determined with the Excel program version 18.0 (Microsoft Corp., Redmond, WA, USA). The statistical significance of the data was evaluated by a t -test performed at the 95% confidence level with Statgraphics Plus version 5.1 (Statistical Graphics Corp., Rockville, MD, USA).
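The crystal violet OD595 readings described above are compared between compound-treated wells and the untreated positive-control wells to derive percent biofilm inhibition and to test significance. The sketch below is a minimal illustration of that calculation; the percent-inhibition formula is a standard assumption not stated explicitly in the text, the OD values are invented, and SciPy stands in for the Excel/Statgraphics workflow used by the authors.

```python
# Illustrative post-processing of crystal violet OD595 readings.
# The inhibition formula is an assumed standard one and the values are made up;
# SciPy replaces the Excel/Statgraphics tools reported in the Methods.
from statistics import mean
from scipy import stats

def percent_inhibition(od_treated, od_control):
    """Percent biofilm inhibition relative to the untreated control wells."""
    return 100.0 * (mean(od_control) - mean(od_treated)) / mean(od_control)

# Hypothetical triplicate OD595 readings for one compound at one concentration.
control_wells = [1.25, 1.31, 1.28]   # bacteria + TSB only (positive control)
treated_wells = [0.22, 0.25, 0.20]   # bacteria + compound

inhibition = percent_inhibition(treated_wells, control_wells)

# Two-sample t-test at the 95% confidence level, mirroring the authors' analysis.
t_stat, p_value = stats.ttest_ind(treated_wells, control_wells)

print(f"biofilm inhibition: {inhibition:.1f}% (p = {p_value:.4f}, "
      f"{'significant' if p_value < 0.05 else 'not significant'} at 95% confidence)")
```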
S. aureus CECT 976, S. aureus CECT 828 and B.
cereus UJA27q were the most susceptible strains with regard to both antibacterial and antibiofilm activities when faced with most of the analyzed compounds. Regarding the structure–activity relationships observed, the coumarin nucleophilic unit seems to favor the antibacterial activity against both S. aureus strains, while a naphthoquinone moiety enhances antibacterial effects against B. cereus UJA27q. Moreover, the replacement of OH groups in the B-ring by methoxy groups (compounds 4 , 5 , 6 , 13 , 14 , 15 and 19a to 24b ) impairs the antibacterial activity of the compounds against target bacteria, while the presence of Cl or OH groups in the molecules seems to enhance the inhibition of biofilm formation as well as the disruption of preformed biofilms.
Preoperative evaluation of the segmental artery of left upper lobe by thin-section CT and 3d-CTA
2b21628d-8383-48e6-8006-7fb4ecacc6e8
11790732
Surgical Procedures, Operative[mh]
Knowledge of pulmonary artery (PA) branching patterns is crucial for thoracic surgeons due to potential bleeding complications during pulmonary resection. Anatomic PA branching pattern variations in the left upper lobe are more common than the right, complicating lung resection, particularly when interlobar fissure separation is incomplete. Hence, preoperative identification of PA branches in the left upper lobe is essential for ensuring patient safety and facilitating lung resection . Recent studies have demonstrated the utility of segmentectomy for early-stage ground-glass opacity predominant lung cancer and a diameter of ≤ 2 cm, leading to an increased adoption of this approach . This underscores the need for detailed information on PA branching patterns at segmental and peripheral levels. Although preoperative evaluation using three-dimensional computed tomography (CT) angiography (3D-CTA) has shown utility , investigations involving a substantial number of cases of segmental PA branching patterns of the left upper lobe using thin-section CT images and 3D-CTA remain limited. Moreover, there is still a lack of substantial comparisons between preoperative imaging and intraoperative findings to support its utility in the left upper lobe . This study aims to evaluate segmental PA branching patterns of the left upper lobe using thin-section CT images and 3D-CTA in order to provide insights into the understanding of PA branches and their variations essential for successful anatomical lung resections. Patients The Ethics Committee of our hospital approved this retrospective study and waived the need for obtaining individual patient consent. A total of 132 consecutive patients with suspected left upper lobe lung cancer who had undergone pulmonary angiography using multidetector row CT (MDCT) and left upper lobectomy between August 2012 and March 2019 were retrospectively reviewed. After excluding 24 patients who had inadequate investigation of tumor-involved hilar structures and technical problems, the final cohort comprised 108 patients (59 men and 49 women; mean age, 69.0 years; age range, 14–85 years) (Fig. ). Some data from 99 of these patients were previously utilized in another study . Contrast-enhanced MDCT Both 64-slice MDCT (Aquilion 64, Toshiba Medical Systems, Tokyo, Japan) and 256-slice MDCT (Brilliance iCT, Philips Healthcare, Cleveland, OH, USA) scanners were used. The technical parameters for the 64- vs. 256-slice MDCT, respectively, were as follows: a detector row configuration of 0.5 mm vs. 0.625 mm, a pitch of 53 vs. 106 (detector pitch of 0.83 vs. 0.83), a reconstruction increment of 0.4 mm vs. 0.5 mm, and a section thickness of 0.5 mm vs. 0.67 mm. An x-ray tube voltage of 120 kV and an automatic exposure control for tube current were used in all examinations. The examinations were performed with the patient in the supine position during a single breath hold at end-inspiration. A dual-head power injector (Dual Shot GX, Nemoto Kyorindo, Tokyo, Japan) was used for all patients for the bolus administration of the contrast material iohexol (Omnipaque 350, GE Healthcare Pharma, Tokyo, Japan) or iopamidol (Iopamiron 300, Bayer Yakuhin, Osaka, Japan) via a cubital vein. In patients weighing ≥ 55 kg, 100 mL of iohexol (350 mg iodine/mL) was injected at a rate of 3.3 mL/s, and scanning was performed 18 s afterwards. In patients with a body weight of 44–55 kg, 85 mL of iohexol (350 mg iodine/mL) was injected at a rate of 3.3 mL/s, and scanning was performed 15 s later. 
In patients weighing < 44 kg, 85 mL of iopamidol (300 mg iodine/mL) was injected at a rate of 3.3 mL/s, and scanning was performed 15 s later. Another protocol for PA and pulmonary vein (PV) separation images was determined from the time-density curve using a test bolus dose. The injection rate was 4 mL/s, with a 20-mL test bolus injected prior to the main injection. The test injection determined the adequate timing for the PA/PV scan. The PA/PV scan was performed with 50 mL of iohexol (350 mg iodine/mL). The saline chaser was 40 mL, and the injection rate was 4 mL/s. The two protocols were comparably suitable for investigating the PA branching pattern in detail. The volume data obtained from the arterial phase were transferred to a workstation (Zio STATION, Ziosoft, Tokyo, Japan), where the data were converted to a 3D-CTA format using the volume-rendering technique. Image analysis Thin-section transverse images were reviewed at window settings of a width of 1600 HU and a level of −200 HU with paging on a viewer (EV insite, PSP Corporation, Tokyo, Japan). The 3D-CTA images were interpreted by rotating the volume on the same viewer. The window, level, and opacity of the volumes were subjectively selected to optimize PA visualization. In the present study, the number and origin of PA branches in the left upper lobe were meticulously identified using 3D-CTA and thin-section images on the same viewer. These images were reviewed with an interval of several days between interpretations. Two board-certified thoracic radiologists, with 12 and 22 years of experience, respectively, independently reviewed each CT image. In cases of discrepancy over branching, the images were re-evaluated with both 3D-CTA and thin-section images until a consensus was reached to avoid interobserver variability. The intraoperative findings of the PA branches of the left upper lobe were compared with the preoperatively obtained 3D-CTA and thin-section images in each patient’s case. The nomenclature used to describe the segmental PA is that of Yamashita . The branches to the left upper lobe arise from the anterior, posterosuperior, and interlobar portions of the vessel: A1 + 2, A3, A4, and A5. The lingular artery (A4 and A5) originates from the interlobar portion (pars interlobaris [PI]) of the left PA (LPA) and may arise from the anterior portion of the mediastinal part of the left arterial trunk (pars mediastinalis [PM]). Moreover, the lingular arteries of PI may sometimes originate from the lower portion, from A8 or the common trunk of A8 and A9 . In the present study, the lingular arteries are identified separately as PI and as PI originating from the lower portion (from A8 or the common trunk of A8 and A9, denoted as PI’) . Statistical analysis Statistical analyses were performed in SPSS version 26 (IBM Corp., Armonk, NY). Inter-observer agreement regarding the PA branching pattern in the left upper lobe was analyzed by calculating Cohen’s kappa coefficient between the two thoracic radiologists’ readings obtained before a consensus was reached. The inter-observer agreement was classified as follows: excellent (κ = 0.81–1.00), substantial (κ = 0.61–0.80), moderate (κ = 0.41–0.60), fair (κ = 0.21–0.40), and poor (κ = 0–0.20).
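Cohen's kappa and the agreement grades defined above can be reproduced with standard tools. The following is a minimal sketch, assuming the two readers' categorical branching labels are available as parallel lists; it uses scikit-learn's cohen_kappa_score, and the example labels are invented for illustration rather than taken from the study data.

```python
# Minimal sketch: Cohen's kappa for two readers' branching-pattern labels,
# graded with the cut-offs listed above. The example labels are invented.
from sklearn.metrics import cohen_kappa_score

def grade_agreement(kappa: float) -> str:
    """Map a kappa value onto the agreement categories used above (boundaries approximated)."""
    if kappa > 0.80:
        return "excellent"
    if kappa > 0.60:
        return "substantial"
    if kappa > 0.40:
        return "moderate"
    if kappa > 0.20:
        return "fair"
    return "poor"

# Hypothetical lingular-artery classifications assigned by the two radiologists.
reader_1 = ["PI", "PI'", "PM", "PI", "PM+PI", "PI", "PI'", "PM"]
reader_2 = ["PI", "PI",  "PM", "PI", "PM+PI", "PI", "PI'", "PM"]

kappa = cohen_kappa_score(reader_1, reader_2)
print(f"kappa = {kappa:.2f} -> {grade_agreement(kappa)} agreement")
```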
The median (range) number of branches of the PA of the left upper lobe, upper division segment, and lingular segment was 4.36 (3–8), 2.62 (1–3), and 1.74 (1–4), respectively. Tables , , and show the branching patterns of the PA of the left upper lobe according to the 3D-CTA and thin-section images. The number of branches of A1 + 2 was two in 34 cases and was most frequent at 31.5% (34/108). However, splits (24.1%), meaning separate branching of subsegmental arteries from both A1 + 2 and A3, or even four branches (12.0%), were observed. For the branching pattern of A1 + 2c, two branches were found in 25 cases (23.1%), one or more branches of A1 + 2c directly originated from the LPA in 63 cases (58.3%), and two branches of A1 + 2c directly originated from the LPA and PI in 4 cases (3.7%). The number of branches of A3 was single in 85 cases, making it the most frequent at 85 (78.7%). The relationship between A1 + 2 and A3 origins was that they originated separately and independently in 40 cases (37.0%). For A3a, two branches were found in 8 cases (7.4%), one or more branches of A3a directly originated from the LPA of the interlobar portion in seven cases (6.5%), and a branch of A3a directly originated from PI in 6 cases (5.6%). As for the branching pattern of the lingular segment, PI (including PI’) was the most frequent at 61.1% (66/108). PI’ was observed in 26 cases (24.1%) (Fig. ). “Other” was a case with branches originating from A7 in addition to the PI. The inter-observer agreement for the branching pattern of A1 + 2, A1 + 2c, A3, A3a, and the lingular segmental artery, as well as the number of PI’, was moderate (κ = 0.59), substantial (κ = 0.69), moderate (κ = 0.56), moderate (κ = 0.53), substantial (κ = 0.72), and moderate (κ = 0.53), respectively. Preoperative 3D-CTA and thin-section CT images identified 99.8% of LPA branches compared to the intraoperative findings, with only one exception.
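The pattern frequencies reported above are simple proportions over the 108-patient cohort (for example, 34/108 = 31.5%). Purely as an illustration of that bookkeeping, the sketch below tallies per-patient lingular-supply labels; the label list is invented and does not reproduce the study data.

```python
# Illustrative tally of per-patient branching labels (invented toy data);
# shows how frequencies such as "34/108 (31.5%)" are derived from raw labels.
from collections import Counter

# Hypothetical lingular-artery supply recorded for each of 12 example patients.
lingular_supply = ["PI", "PI", "PI'", "PM", "PI", "PM+PI",
                   "PI'", "PI", "PI", "PM+PI", "PI", "PI'"]

counts = Counter(lingular_supply)
n = len(lingular_supply)

for pattern, count in counts.most_common():
    print(f"{pattern:>6}: {count:2d}/{n} ({100 * count / n:.1f}%)")
```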
Recent studies have demonstrated the utility of segmentectomy for early-stage, ground-glass-opacity-predominant lung cancer with a diameter of ≤ 2 cm, leading to increased adoption of this approach. This underscores the importance of detailed information on PA branching patterns at the segmental and subsegmental levels. In this study, the PA branching pattern of the left upper lobe before lobectomy was evaluated using 3D-CTA images and thin-section images and compared with intraoperative findings. The number of branches of the left upper lobe PA ranged from 3 to 8, consistent with previous investigations. Similar to Yamashita's study on the bronchovascular anatomy of 165 specimens, it was observed that A1 + 2 often has multiple stems (Yamashita: 89.1%; present study: 88.9%), while A3 commonly arises as a single stem (Yamashita: 77.0%; present study: 78.7%). A1 + 2c is known to arise directly from the PA, as observed in 54.6% of cases in Yamashita's study and 58.3% in the present study. When the branching pattern of A3 is other than a single stem, A3a may unexpectedly originate from various locations, including branching from the PI and directly from the LPA. Cases with only PM branching of the lingular artery are infrequent, as demonstrated in the present study (4.6%). Furthermore, the lingular artery branches from below the bifurcated A8, termed PI', in approximately a quarter of cases. The presence of this branching is considered important not only during upper or lower lobe resections but also during segmental resections. However, PI' branches off from the caudal peripheral side of the LPA, making it frequently thinner than PI. Since these thin branches are not depicted by 3D-CTA and are seen only on thin-section CT, the presence of PI' should be known and carefully checked for on thin-section CT (Fig. ). Several reports have demonstrated the depiction of PA branches using 3D-CTA compared with intraoperative findings. For instance, previous studies have demonstrated that 95–99.7% of PA branches can be identified using 3D-CTA and thin-section CT images. However, some fine branches cannot be depicted using 3D-CTA alone, making confirmation with thin-section CT essential. The use of contrast agents allows high-quality 3D-CTA images of the PAs to be created. While 3D-CTA is crucial for surgeons to visualize PA branching patterns during preoperative simulation, it is necessary to distinguish, in detail, the PA branching patterns at the subsegmental level. For instance, in cases like A1 + 2c or A3a, two branches may exist or a branch may originate from the PI, making it essential to assess PA branching in relation to subsegmental and more peripheral bronchial branching patterns. Therefore, it is indispensable to use both 3D-CTA and thin-section CT images for such assessments. This approach also improves interobserver agreement. In this study, we used both 3D-CTA and thin-section CT for evaluation; however, we did not compare the findings of the individual methods, namely thin-section CT or 3D-CTA, with intraoperative findings, nor did we assess interobserver agreement for each method separately. This is due to the aforementioned reasons, as well as the fact that the depiction of branches can easily change depending on the rendering method in 3D imaging. Additionally, with increased reading experience, it becomes possible to recognize a potential branch in a subtle elevation that might initially appear as an absence of PA branching. Therefore, we believe it is essential to use both imaging modalities for thorough evaluation.
In this study, we examined inter-observer agreement on the PA branching patterns of the left upper lobe, obtaining moderate to substantial agreement between the two radiologists. Discrepancies were primarily due to differences in how the segments were delineated, with some cases in which the less experienced reader missed thin arteries. It is crucial to consider the possibility of classification differences due to evaluator variability. Our study is the first to investigate interobserver agreement on PA branching using 3D-CTA and thin-section CT. This highlights the need for future studies to explore methods of improving the consistency of preoperative evaluation among clinicians, such as thoracic surgeons and radiologists, ensuring that even less experienced physicians can better recognize thin branches. Although reports comparing 3D-CTA and thin-section CT with intraoperative findings on the PA branching patterns of the anatomically variable right upper lobe have identified 99.7% of PA branches, there have been no reports with a substantial number of cases for the similarly variable left upper lobe. This study is the first to compare 3D-CTA and thin-section CT with intraoperative findings in the left upper lobe, and it achieved a 99.8% detection rate of PA branches. On the left side, there were concerns that artifacts due to cardiac motion might obscure fine branches of the lingular segment. However, the comparison with intraoperative findings demonstrated depiction capabilities equivalent to those in the right upper lobe, alleviating these concerns. This study shows that 3D-CTA is sufficient for preoperative evaluation even in the left upper lobe, demonstrating efficacy comparable to intraoperative findings. The present study had some limitations. First, its retrospective nature introduced a potential selection bias. Second, both 64-slice and 256-slice MDCT scanners were utilized. We analyzed the PA branching pattern of the left upper lobe using 3D-CTA and thin-section CT images with two different CTA protocols. However, we believe that these differences did not impact the results, as the CT images were adequate for investigating the PA branching patterns. Finally, although we compared 3D-CTA and thin-section CT with intraoperative findings, this study focused solely on the branching patterns from the LPA in left upper lobe resections. However, the high concordance rate observed with intraoperative findings for the LPA suggests that the branching patterns identified by CT are likely to reflect the actual anatomy with high accuracy. To further advance segmentectomy and subsegmentectomy techniques, future research should investigate the more peripheral PA branches. In conclusion, 3D-CTA and thin-section images provided precise preoperative information regarding the subsegmental PA branching pattern in the left upper lobe. Despite initial concerns, depiction in the left upper lobe was comparable to that reported for the right upper lobe in a previous study. It is important to utilize thin-section CT alongside 3D-CTA for accurate identification of smaller branches such as PI'. These findings support the utilization of 3D-CTA and thin-section CT for preoperative evaluation in left upper lobe surgery, contributing to the safety and ease of lobar or segmental left upper lobe resection.
The influence of parents’ oral health literacy and behavior on oral health of preschool children aged 3–6 years- evidence from China
86593834-b78d-47c8-8e6a-98ca49670ca5
11603917
Dentistry[mh]
Dental caries and gum disease are common oral diseases in preschool children aged 3 to 6 years. They are influenced by several factors related to the children themselves, their families, and society. Of these, the child-related factors primarily include tooth brushing, sugar intake, and other related behaviors, whereas the family-related and social factors primarily include parents' knowledge, attitudes, and behaviors as well as socio-economic conditions. Oral health in children has been strongly associated with their health behaviors. However, preschool children are still in the period of growth and development, and many of their health behaviors are still being learned. Additionally, their physiological condition limits self-care, so their parents must support them with daily care, diet, living habits, the home environment, and in other respects. Therefore, the oral health status of preschool children is predominantly affected by their parents. Based on the Knowledge Attitude Behavior model, parents' health knowledge directly affects children's attitudes and indirectly affects their behaviors. For example, parents' self-efficacy perception regarding twice-daily brushing has been associated with twice-daily brushing in children aged 3 to 4 years. According to the Health Beliefs model, the oral health status of children can be determined by the beliefs of their guardians. Parents' oral health behaviors, knowledge, and attitudes can directly affect the number of carious teeth and the level of gum health among children by influencing children's own oral health behaviors. Improved health literacy among parents not only favorably affects self-reported oral health-related behaviors but also improves children's oral health, such as gum health and dental caries. Health literacy, health behavior, and health status influence each other directly and indirectly. This study provides evidence from a Chinese community by describing the association of parents' oral health literacy and behavior with their children's gum health and dental caries through oral health behavior management. Data sources This study was conducted across five primary schools in Chengdu, Sichuan Province, China. A total of 1,102 preschool children aged 3 to 6 years underwent professional oral examinations by four dentists aided by community family doctors and school teachers. Dental examinations were conducted to determine dental caries, based on the methods and standards described by the World Health Organization. The soft scale index can be assessed on a specific area (e.g., front or back teeth) and tooth surface (e.g., buccal or lingual). Dental examinations were conducted in natural light at the respective schools using a flat mouth mirror and a periodontal probe, collecting data on tooth decay and gum health. Our research was conducted with the informed consent of each child's parents or legal guardians. The children's parents participated in questionnaire-based surveys, which collected information about their oral health literacy, their health behaviors, and their children's health behaviors. Ethics approval and consent to participate For this research, informed consent was obtained from the parents or legal guardians of all children aged under 16 years. In addition, an informed consent to participate form was signed by each participant in the study.
Inclusion and exclusion criteria The inclusion criteria for children were as follows: (1) aged between 3 and 6 years; (2) with clear consciousness and able to follow basic instructions; (3) without serious systemic diseases or acute dental problems; and (4) able to cooperate during an oral examination. The inclusion criteria for parents were as follows: (1) served as the child's caregiver and familiar with their living habits; (2) willing to participate in the questionnaire survey and to cooperate with the staff for its completion; and (3) able to understand and respond to the survey questions. The exclusion criterion was a situation in which the parents and child did not live together. Questionnaire design The questionnaire primarily included the following three sections: (i) children's oral health behavior, (ii) parents' intervention in children's oral health behavior, and (iii) parents' oral health literacy. The questionnaire on common oral health behaviors among parents and children was compiled and evaluated from the literature. At the same time, the responses were sorted into binary variables based on the research purpose. As questions with yes or no answers are typically non-scale questions, direct reliability and validity analysis is not feasible. Therefore, our research referred to expert opinions, conducted a pre-investigation in a small sample, and finally developed a formal questionnaire to ensure its reliability. Questions about parents' oral health literacy were primarily developed from the National Oral Epidemiological Survey of China, which has good reliability and validity. Concurrently, to ensure data reliability, school teachers underwent relevant training. Additionally, stomatologists and school teachers assisted the parents in completing the questionnaires on site and in their unified collection. All the contents of the questionnaire are attached in the last part of the paper. Children's oral health status (outcome variables) Dental caries and tartar among the children were the outcome variables. Based on the decayed-missing-filled teeth (DMFT) index, decayed teeth or carious filled teeth are recorded as decayed (DT), teeth missing for any reason are classified as missing (MT), and filled teeth without secondary caries are classified as filled (FT); the DMFT index comprises DT, MT, and FT. We categorized the DMFT as a dichotomous variable, with no caries recorded as 0 and one or more affected teeth recorded as 1. The plaque index (PI) was used to describe periodontal status. PI is a qualitative index used to evaluate plaque and its degree, scored on a scale of 0 to 3. Additionally, PI was sorted into a binary variable, where a value of 0 (no soft scale) is recorded as 0 and any higher value is recorded as 1. Parents' oral health literacy and behavior (explanatory/instrumental/mediating variables) Parents' health behaviors primarily included the following questions: 1) Do you help your child brush their teeth? (Behavior 1); 2) Do you provide fluoride toothpaste? (Behavior 2); 3) Do you teach your child to floss? (Behavior 3); 4) Do you cultivate the habit of rinsing your child's mouth with water after meals? (Behavior 4); 5) Do you take your child to the dentist regularly? (Behavior 5). All the behavioral variables were dichotomous and conformed to the Bernoulli distribution.
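The outcome coding described above (DMFT dichotomized as caries-free versus one or more affected teeth, and PI dichotomized as no soft scale versus any soft scale) can be illustrated with the following short Python sketch; the column names and values are hypothetical, not the study data.

import pandas as pd

df = pd.DataFrame({
    "DT": [2, 0, 1],            # decayed teeth
    "MT": [0, 0, 1],            # missing teeth
    "FT": [1, 0, 0],            # filled teeth
    "plaque_index": [2, 0, 1],  # PI on the 0-3 scale
})

df["dmft"] = df["DT"] + df["MT"] + df["FT"]
df["dmft_binary"] = (df["dmft"] >= 1).astype(int)       # 0 = no caries, 1 = one or more affected teeth
df["pi_binary"] = (df["plaque_index"] > 0).astype(int)  # 0 = no soft scale, 1 = any soft scale
print(df)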
Health literacy was primarily assessed based on the following questions: 1) Is it important to protect children's teeth at 6 years of age?; 2) Must "the bag day" be corrected, and will it be inherited?; 3) Can untreated bad baby teeth become good later?; 4) Can pit and fissure sealing prevent caries in children?; and 5) Does fluoride not protect teeth? A score of 1 was assigned for agreeing and 0 for disagreeing/not knowing. A total score ≥ 3 indicated good literacy, whereas a score < 3 indicated poor literacy. This variable was labeled literacy. Children's oral health behavior (control variables) The control variables primarily included the basic characteristics of the child, such as sex and age. Oral health-related behaviors primarily included the following questions: 1) Do you brush your teeth more than two times a day? (Control 1); 2) Do you eat sugary snacks more than three times a day? (Control 2); 3) Does your child not clean their mouth after eating before bed? (Control 3); and 4) Does your child often chew hard objects? (Control 4). We did not focus on the socioeconomic covariates of the parents because the sample was selected from similar types of schools in the district, comprising parents with similar socioeconomic conditions and literacy levels.
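The literacy scoring rule described above (one point per item answered correctly, with a total of three or more classed as good) can be sketched as follows; the item keys are hypothetical.

def literacy_score(answers: dict) -> tuple[int, str]:
    """answers maps an item id to 1 (agree/correct) or 0 (disagree/do not know)."""
    score = sum(answers.get(f"item{i}", 0) for i in range(1, 6))
    label = "good" if score >= 3 else "poor"
    return score, label

print(literacy_score({"item1": 1, "item2": 0, "item3": 1, "item4": 1, "item5": 0}))  # (3, 'good')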
Benchmark model The outcome variables, the DMFT index and PI, were coded as binary variables, and the Probit regression model was used for estimation.
The preliminary model was established as follows:

(1) Probit[Y_i] = β_0i + β_1i·Behavior_i + β_2i·Control_i + ε_i

In formula (1), Y_i (the DMFT index or PI) is the dependent variable and Behavior_i is the independent variable, which is the sum of the respective behavior items. Control_i represents a group of exogenous control variables, β_1i is the parameter of interest, and ε_i is the random error term. Instrumental variable model Two-way causality and missing variables in the identification may have affected our findings, thus introducing bias in the estimated effect. For example, parents' health-related behaviors and children's oral health often display a two-way effect, potentially generating reverse causality. Children's oral health is affected by the children themselves, their families, and society. Additionally, missing variables may exist while identifying the effects. Therefore, an instrumental variable (IV) was used to resolve these endogeneity problems. According to the theory of health behavior, oral health literacy among parents is relatively exogenous because it does not directly affect their children's health status. Therefore, parents' oral health literacy was selected as the instrumental variable. The model extended the two-stage least squares linear modeling framework of instrumental variables to the nonlinear model as follows:

(2) Probit[Behavior_i] = α_0i + α_1i·literacy_i + α_2i·Control_i + ρ_i

In formula (2), Behavior_i is the dependent variable, whereas literacy_i is the instrumental variable. α_0i is the intercept, α_1i is the parameter of interest, and Control_i is a set of exogenous control variables. Mediating effect model To confirm the mediating effect between parents' health-related behaviors and children's oral health status, we selected oral health behaviors as the mediating variable. Additionally, a stepwise regression method and the Sobel test were adopted to establish the effect. The following three models were established:

(3) Y_i = δ_0i + δ_1i·literacy_i + δ_i·Control_i + μ_1i

(4) Behavior_i = ∂_0i + ∂_1i·literacy_i + ∂_i·Control_i + μ_2i

(5) Y_i = γ_0i + γ_1i·Behavior_i + γ_2i·literacy_i + Control_i + μ_3i

where Y_i (the DMFT index or PI) is the dependent variable in Eqs. (3) and (5), and Behavior_i is the intermediary variable modeled in Eq. (4). δ_1i, γ_1i, and ∂_1i denote the coefficients of interest, indicating the total, direct, and indirect effects, respectively. A mediating effect exists if the coefficients of interest in Eqs. (4) and (5) are both significant simultaneously.
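To make the estimation strategy concrete, the sketch below fits Eq. (1), illustrates the two-stage logic behind the instrumental-variable model of Eq. (2), and runs the stepwise mediation regressions of Eqs. (3)-(5) with statsmodels on simulated data. The variable names and the data are hypothetical, and the simplified two-stage substitution shown here stands in for the authors' exact estimator, which is not fully specified in the text.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "literacy": rng.integers(0, 2, n),  # instrument: parental literacy (1 = good)
    "age": rng.integers(3, 7, n),
    "sex": rng.integers(0, 2, n),
})
df["behavior"] = (0.8 * df["literacy"] + rng.normal(size=n) > 0).astype(int)
df["dmft_binary"] = (0.5 - 0.7 * df["behavior"] + rng.normal(size=n) > 0).astype(int)

controls = sm.add_constant(df[["age", "sex"]])

# Eq. (1): benchmark Probit of the outcome on parental behavior and controls
benchmark = sm.Probit(df["dmft_binary"], pd.concat([df[["behavior"]], controls], axis=1)).fit(disp=False)

# Eq. (2): first stage -- Probit of behavior on the literacy instrument and controls
Xf = pd.concat([df[["literacy"]], controls], axis=1)
first_stage = sm.Probit(df["behavior"], Xf).fit(disp=False)
df["behavior_hat"] = first_stage.predict(Xf)

# Second stage: replace behavior with its fitted probability (simplified two-stage logic)
second_stage = sm.Probit(df["dmft_binary"], pd.concat([df[["behavior_hat"]], controls], axis=1)).fit(disp=False)

# Eqs. (3)-(5): stepwise mediation regressions, written as linear models as in the text
total  = sm.OLS(df["dmft_binary"], pd.concat([df[["literacy"]], controls], axis=1)).fit()
a_path = sm.OLS(df["behavior"],    pd.concat([df[["literacy"]], controls], axis=1)).fit()
b_path = sm.OLS(df["dmft_binary"], pd.concat([df[["behavior", "literacy"]], controls], axis=1)).fit()

print("benchmark coefficient:", round(benchmark.params["behavior"], 3))
print("second-stage coefficient:", round(second_stage.params["behavior_hat"], 3))
print("indirect effect (a*b):", round(a_path.params["literacy"] * b_path.params["behavior"], 3))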
The basics A total of 1,102 children and parents were included in our study. The children had a mean DMFT index of 1.58 and a mean PI of 0.68; 646 children (58.6%) had zero caries, and 517 (46.9%) had no gum disease. A chi-square test was conducted. The DMFT index in children increased with their age (P < 0.01) (Table ). The children demonstrated relatively good oral health behavior. The better the parents' oral health behavior and health literacy, the lower the children's DMFT index and PI and the better their oral health status (P < 0.01). These results were consistent with our hypothesis. Baseline regression and endogenous results Baseline regression and endogenous processing of the DMFT index We included the statistically significant covariates from the chi-square test in the regression model for modeling and analysis. Columns (1), (3), (5), (7), and (9) are the estimated results of the baseline regression. By contrast, columns (2), (4), (6), (8), and (10) are the estimated results after considering endogeneity. The results in columns (1) to (4) suggest a positive influence of parents' oral health behavior on children's oral health status (significant at the 1% level). After controlling for the endogenous variables, the results in columns (6) to (10) indicate a significantly enhanced effect. The regression results in columns (5) and (10) suggest that regular dentist visits by children were insignificant, and the difference was significant after controlling for endogeneity. Thus, parents' oral health behaviors can improve children's oral health status (Table ). Baseline regression and endogenous processing of the PI index We incorporated the statistically significant covariates from the chi-square test into the regression model for modeling and analysis. Columns (1) to (5) are the baseline estimates, and columns (6) to (10) are the estimated results after considering endogeneity. Parents' oral health behaviors positively influenced children's oral health status, and the effect was significantly enhanced after controlling for endogeneity (Table ). Analysis results of the intermediary effect model We verified the mediating effect through the following three aspects of empirical analysis: (i) the influence of parents' oral health literacy on children's oral health status; (ii) the influence of parents' oral health literacy on the mediating variables (oral health behaviors); and (iii) the influence of parents' health literacy on children's oral health after controlling for the mediating variables. For the DMFT index, Behaviors 1 to 4 demonstrated a mediating effect, whereas Behavior 5 did not (Table ). For PI, Behaviors 1 to 5 all demonstrated a mediating effect. Additionally, we utilized the Sobel test to evaluate the indirect effects of parents' health behaviors on children's oral health and to determine the mediating role between the independent and dependent variables.
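The Sobel test mentioned above combines the a-path coefficient from Eq. (4) and the b-path coefficient from Eq. (5) with their standard errors; the values in the sketch below are illustrative only, not results from this study.

import math
from scipy.stats import norm

def sobel_test(a: float, se_a: float, b: float, se_b: float) -> tuple[float, float]:
    """Return the Sobel z statistic and two-sided p-value for the indirect effect a*b."""
    se_ab = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    z = (a * b) / se_ab
    p = 2 * (1 - norm.cdf(abs(z)))
    return z, p

z, p = sobel_test(a=0.25, se_a=0.05, b=-0.30, se_b=0.08)
print(f"Sobel z = {z:.2f}, p = {p:.4f}")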
The complex association of parents' oral health knowledge, attitudes, and behaviors with children's oral health behavior and dental caries has been investigated to improve children's oral health.
Researchers have identified numerous factors associated with dental caries. However, their direct or indirect effects on oral health outcomes remain unclear. In this study, after controlling for covariates such as children's oral health behavior, the DMFT index was positively correlated with parents' oral health behavior. Moreover, this positive correlation was significantly enhanced upon using parents' health literacy as an instrumental variable, indicating an endogeneity problem in parents' oral health behavior toward children. In particular, taking children to the dentist regularly was correlated with good oral health in children. Therefore, researchers should explore other parental behaviors associated with children's oral health management, investigate the multi-dimensional factors leading to gum disease and dental caries, and identify effective intervention measures to improve children's oral health. Overall, parents helping children brush their teeth, providing fluoride toothpaste, cultivating the habit of rinsing after meals, flossing, and other healthy behaviors all positively influence children's oral health. For the mediating effect, we utilized the Knowledge, Attitude, Belief, and Practice (KABP) model of health-related behavior change. Parents' health literacy could change children's oral health status through behavior. Additionally, parents' oral health behavior toward children was considered the mediating variable to explore this association. Parents' self-efficacy in helping their children brush their teeth twice daily exerted the highest mediating effect on their children's oral health, because brushing is directly associated with oral health. Other mediating effects were consistent but relatively weak. However, taking children to the dentist regularly negatively affected dental caries, because children who already have dental caries and other oral diseases visit the dentist more often. These findings indicate a two-way causal association, which warrants investigation. Researchers should utilize health-related theories to design and evaluate interventions to improve children's oral health behaviors, particularly in early childhood, when children are incapable of fully independent behavior. Parents are critical in shaping positive attitudes in children and promoting their oral health behaviors at home, consistent with our findings. Therefore, parents should first participate in planning and implementing oral health education and promotion programs to improve their children's oral health. The strengths of our research were as follows: most studies primarily consider parents' own health behaviors when measuring their health literacy, whereas we defined parents' health literacy and behavior in terms of managing their children's oral health. Our approach was more targeted and can better explain the influence of parents' health management behaviors on children's oral health. Moreover, we used the instrumental variable method and the mediating effect model to confirm the association of parents' oral health literacy and health behavior with children's dental caries, compensating for the missing variables and two-way causal endogeneity not considered in previous studies. However, the study has several limitations. First, we did not conduct a prospective study; thus, we could not determine the cause and effect of the associations. Second, the program was implemented in a school setting; therefore, the results may have been influenced by school activities.
Finally, insufficient evidence supports our findings regarding regular visits to the dentist. Parents' oral health literacy and behavior are associated with their children's oral health status. Identifying the influencing factors and improving parents' oral health behavior can enhance this correlation and solve the endogenous problem. Additionally, improving parents' oral health literacy can advance children's oral health through the mediating effects of parents helping children brush their teeth, providing fluoride toothpaste, and cultivating post-meal gargling, flossing, and other health behaviors. Parents play an important role in the oral health status of preschool children aged 3 to 6 years. Oral health in children can be effectively improved by formulating oral health interventions for parents.
ICMR National Virtual Centre for Clinical Pharmacology with Network of Rational Use of Medicines & Product Development Centres
cdcc7b69-8067-41be-96ae-87d5257efd84
9210526
Pharmacology[mh]
Clinical Pharmacology involves the development of new drugs, their application as therapeutic agents, and the study of adverse effects in individuals and society. The Indian Council of Medical Research (ICMR) has supported the development of clinical pharmacology in India over the last 50 yr through its extramural and intramural programmes, by way of training programmes for capacity building and advanced research activities. The training programmes also covered participants from other countries. Centres of Advanced Research (CAR) were set up in Mumbai, Chandigarh, Puducherry and Hyderabad for research on pharmacokinetics, therapeutic drug monitoring (TDM), pharmacovigilance, clinical trials, pharmacodynamics, pharmacogenetics and traditional medicines relevant to public health, the development of national policies, drug development and education, attracting grants from agencies such as the WHO. In 2010, a brainstorming session recommended the creation of an Institute of Clinical Pharmacology with a public health orientation, aimed at safe, effective and economical products and the rational use of medicines for the Indian population. However, it could not be started owing to a paucity of funds. The need for such an institute was reiterated in a review of clinical pharmacology research in India, for developing products for the Indian population and for the rational use of medicines. Furthermore, it was noted that drug development in academia and Government-funded institutions is hampered by inadequate trained manpower and by a lack of interaction between industry and academia/public research institutions, and between basic science and clinical researchers. In view of the above, current scientific developments and the need to further strengthen clinical pharmacology towards the healthcare needs of the country, the National virtual Centre for Clinical Pharmacology (NvCCP) was set up in 2019, with a network of Product Development Centres (PDCs) to promote drug development in line with the New Drugs and Clinical Trials Rules 2019 notified on 19 March 2019, Rational Use of Medicines Centres (RUMCs) for cost-effective use, and a Technical Advisory Group (TAG) (deemed virtual centre) of experts from different disciplines for guidance and for monitoring the progress of these activities. The envisaged objectives and output of the PDCs were as follows: (i) to evaluate completed research projects (20/yr) for their suitability for developing products for human use; (ii) to recommend suitable products for further validation and investigational new drug (IND) studies, and to develop IND applications; (iii) to carry out Phase I, II, and III clinical trials (two/year); (iv) to carry out studies providing evidence-based recommendations for the safe and effective use of marketed products using TDM, biomarkers and genetic tests (two/year); and (v) to carry out studies providing evidence-based recommendations for standard treatment guidelines for public health/Government programmes (two/year). The primary impact will be the development of a national asset for conducting clinical trials, along with publications, training, capacity building, guidelines on the minimum/optimum requirements for conducting clinical trials, and standard operating procedures (SOPs) for clinical trial-related activities. Data from clinical trials of marketed drugs will provide an evidence base for policies, practice and cost-saving strategies, and the evaluated and recommended projects/products, if successful, will lead to safe and effective products.
Eleven institutions and investigators were identified for the PDCs, based on their prior work, initiatives taken, publications, departmental infrastructure, faculty that could contribute, availability of collaborating institutes and previous grants received, and were approved in 2019. There were four PDCs in Mumbai, Maharashtra, two in Telangana, and one each in Chandigarh, New Delhi, Lucknow, Kolkata and Patna. During the first year, these PDCs evaluated completed research projects funded by the ICMR and shortlisted five projects for further development. Guidelines for the infrastructure, facilities and SOPs required for Phase I studies were prepared. The PDCs also conducted population pharmacokinetic studies of hydroxychloroquine (HCQ) in healthcare workers and COVID-19 vaccine trials. Widespread overuse and inappropriate selection of antimicrobials and a high level of polypharmacy, leading to adverse drug reactions (ADRs), antimicrobial resistance, lack of effect and increasing costs to patients and society, have been noted. Globally, half of all medicine use has been found to be inappropriate. In the UK, 7-10 per cent of prescriptions written by newly graduated doctors were found to contain errors. Inadequate education and ineffective or insufficient regulation around appropriate medication use were found to be the important reasons. Hence, prescribing competency for medical graduates was included in the curriculum. In view of this, in 2019 the ICMR set up a network of RUMCs (rational use of medicines centres) in the departments of Pharmacology of various teaching medical institutions located in different parts of the country, with the following envisaged objectives and output: (i) prescription audit/research (1000 prescriptions per year): evaluate, analyze and interpret prescriptions for WHO indicators (see the sketch below), inappropriateness, use of irrational fixed-dose drug combinations (FDCs) and drugs outside the national list of essential medicines (NLEM), identify gaps and errors, contribute to a national database, and recommend corrective steps; (ii) develop an online training course on prescribing skills (PSC) for interns, Government medical officers and private general practitioners; (iii) develop a curriculum based on the Medical Council of India (MCI) and university curricula, with prioritization based on published literature and experience (two modules, plus review of two modules developed by another centre); (iv) develop training modules for the course based on standard treatment guidelines, standard treatment workflows and other resources (two modules, plus review of two modules developed by another centre); and (v) develop and validate assessment questions for two modules, and review two modules developed by another centre. The envisaged impact of these centres was as follows: the online PSC would be made available to all interns and practitioners in the country. Pre-test and post-test assignments will add to the training experience and assess change in knowledge. Prescription research will evaluate approximately 10,000 prescriptions. Data from all centres will be aggregated and published, and will also be used to revise the content of the online course and provide inputs for NLEM revisions. Fifteen non-ICMR institutions and investigators were identified based on their prior work, initiatives taken, publications, departmental infrastructure, faculty that could contribute, links with collaborating institutes, and previous grants received. Five RUMCs were also set up at the ICMR institutes. These centres were approved in 2019 and set up in the same year.
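The WHO/INRUD core prescribing indicators referred to in objective (i) are typically computed as follows; the prescription records in this sketch are hypothetical, and the indicator set shown is the standard core list rather than the centres' exact reporting format.

prescriptions = [
    {"drugs": ["amoxicillin", "paracetamol"], "generic": [True, True],
     "antibiotic": [True, False], "injection": [False, False], "on_eml": [True, True]},
    {"drugs": ["vitamin B complex"], "generic": [False],
     "antibiotic": [False], "injection": [True], "on_eml": [False]},
]

n_rx = len(prescriptions)
n_drugs = sum(len(p["drugs"]) for p in prescriptions)

indicators = {
    "average drugs per prescription": n_drugs / n_rx,
    "% drugs prescribed by generic name": 100 * sum(sum(p["generic"]) for p in prescriptions) / n_drugs,
    "% prescriptions with an antibiotic": 100 * sum(any(p["antibiotic"]) for p in prescriptions) / n_rx,
    "% prescriptions with an injection": 100 * sum(any(p["injection"]) for p in prescriptions) / n_rx,
    "% drugs from the essential medicines list": 100 * sum(sum(p["on_eml"]) for p in prescriptions) / n_drugs,
}
for name, value in indicators.items():
    print(f"{name}: {value:.1f}")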
There were two centres each in Kolkata, Ludhiana and New Delhi, and one each in Mumbai, Puducherry, Ahmedabad, Chandigarh, Patna, Bhopal, Vadodara, Vellore and Bengaluru. The six centres in ICMR institutes were at the National Institute for Research in Reproductive and Child Health (NIRRCH) and the National Institute of Immunohematology (NIIH), Mumbai; the National Institute of Cholera and Enteric Diseases (NICED), Kolkata; the National Institute for Research in Tuberculosis (NIRT), Chennai; the Rajendra Memorial Research Institute of Medical Sciences (RMRIMS), Patna; and the National Institute of Epidemiology (NIE), Chennai. During the first year, the RUMCs constituted RUMC committees of clinicians from clinical departments and community medicine, and developed the curriculum, training modules and assessment questions for the prescribing skills course (PSC). The online PSC was launched in September 2020 by the Director General, ICMR, with the ICMR-National Institute of Epidemiology, through the Government of India SWAYAM portal. Approximately 5000 prescriptions were captured by the RUMCs and analyzed. The safety and efficacy of hydroxychloroquine for prophylaxis against COVID-19 in healthcare workers were also studied. This ICMR initiative envisions a national platform to promote new therapeutic products arising from research in Indian institutions and to build competency in the rational use of medicines, with the goal of providing cost-effective healthcare. The research activities are undertaken under a virtual centre with a network of centres, funded for five years. Subsequently, there will be a need to establish a permanent centre with physical infrastructure that will enable a robust mechanism for research on different aspects of product development and other areas of clinical pharmacology with translational potential, for the benefit of the Indian population.
Integrating Transcriptomics and Metabolomics to Comprehensively Analyze Phytohormone Regulatory Mechanisms in
aa9278da-94dc-4b42-b4d6-3f07ef21844c
11855671
Biochemistry[mh]
Light is an important environmental signal on which plants depend for survival; however, it is also a source of abiotic stress for plants. UV-B radiation (UV-B; 280–315 nm) is an intrinsic part of the sunlight that reaches the Earth's surface. Plants are sessile organisms, and therefore exposure to UV-B throughout their life cycle is unavoidable. Although the atmospheric ozone layer absorbs most of the UV-B, about 5% of solar UV-B reaches the Earth's surface. UV-B can be potentially harmful to plants. It not only impairs the photosynthetic system of plants but also reduces chlorophyll content, causes DNA damage and affects DNA replication and transcription, thereby inhibiting plant development and metabolism. Hormones have a major regulatory role in plant growth, development and adaptation to the environment. When plants defend themselves against abiotic stresses, multiple hormones cross-talk and co-regulate. Each hormone has its own unique function in the signal transduction process, and the mechanisms of action for regulating plant stress tolerance also differ. Low-temperature stress induces the production of abscisic acid (ABA) to activate ABA-dependent signaling response pathways and increase ABA content in plants, thereby positively regulating plant tolerance to adverse conditions. Abiotic stress can induce the expression of genes related to jasmonic acid (JA) synthesis in response to various unfavorable conditions. Salicylic acid (SA) is involved in regulating the plant response to UV-B stress, and SA serves as a signal to reduce oxidative stress by triggering the up-regulation of antioxidants, thereby promoting growth and photosynthesis. It has been found that cytokinin (CK) can cross-talk with ethylene signaling to co-regulate the plant response to low-temperature signals. It has also been shown that cold stress causes gibberellin (GA)-induced degradation of DELLA repressor proteins in Arabidopsis thaliana, which control many key developmental processes and responses to stresses such as cold. Therefore, the study of changes in hormone levels in plants, as well as of phytohormone signaling networks, is essential for understanding plant responses to various types of environmental stresses. Rhododendron chrysanthum Pall. (R. chrysanthum) is a rhododendron growing in high-altitude, low-temperature mountainous areas and is one of the rare germplasm resources in the world. After a long adaptive evolutionary process, R. chrysanthum has evolved resistance to abiotic stresses such as low temperature, drought and strong UV-B, and thus can be used as a good experimental material for exploring plant resistance to UV-B. Existing studies on R. chrysanthum have focused on the response of its metabolic pathways to UV-B. Hormonal studies on R. chrysanthum, on the other hand, have focused on the role played by exogenous ABA in regulating the metabolic pathways of R. chrysanthum. Meanwhile, comprehensive studies of the changes in the various phytohormones and in their biosynthesis- and signaling-related genes and metabolites in R. chrysanthum under UV-B stress are scarce. Not only are individual metabolic pathways regulated by environmental factors, but the homeostasis of the entire metabolic network is also affected. Compared with traditional physiological and biochemical studies and genetic phenotyping, omics research techniques allow for a more systematic observation of physiological changes in plants and may offer greater potential for the discovery of new genes.
In recent years, the integrated analysis of multiple omics platforms has emerged as a new trend in the field of plant metabolism research. In this experiment, seven important hormones were studied transcriptomically and metabolomically in R. chrysanthum. The present study comprehensively examines the various types of phytohormones detected in R. chrysanthum, from biosynthesis to signaling, building on previous studies. Changes in metabolite levels and pathway gene expression during the defense of R. chrysanthum against UV-B demonstrate the mechanisms by which different hormones regulate the defense of plant leaves against abiotic stresses. This is the first complete transcriptomic and metabolomic analysis of phytohormonal regulation during the defense of R. chrysanthum against UV-B radiation. The findings enrich existing insights into the mechanisms of phytohormone action and provide new perspectives in the field of phytohormone responses to UV-B. 2.1. Differential Photosynthetic Performance upon UV-B Exposure The effect of UV-B radiation on R. chrysanthum was assessed by comparing photosynthetic characteristics such as Fv/Fm and Fm. Chlorophyll fluorescence was measured with the IMAGING PAM. The results showed that Fv/Fm and Fm decreased significantly after UV-B radiation. To further understand the changes in the electron transport chain of the photosynthetic system in the leaves under the same treatment, five JIP-test parameters were selected to reflect changes in the activity of the electron transport chain. As shown in , UV-B radiation significantly reduced the performance index (PIABS) of the leaves of R. chrysanthum, the quantum yield of the light energy absorbed by the Photosystem II (PSII) reaction centers for electron transport (φEo), the potential activity of PSII (Fv/Fo) and the captured energy per reaction center of PSII (TRo/RC). UV-B radiation likewise led to a significant decrease in the chlorophyll content of R. chrysanthum ( A). Taken together, the changes in the chlorophyll fluorescence parameters suggest that UV-B radiation can have severe adverse effects on R. chrysanthum; therefore, R. chrysanthum responds to this stress through changes in its own metabolic pathways. This experiment used LC-MS/MS (liquid chromatography–tandem mass spectrometry) to analyze the metabolism of control (CK) and UV-B-irradiated R. chrysanthum leaves ( B). Many differential metabolites were detected in the metabolic pathways of amino acids, lipids, and other macromolecules after UV-B radiation of R. chrysanthum. Therefore, we sequenced its leaves, and the numbers of relevant differentially expressed genes in leaves after UV-B radiation were obtained from the sequencing results. The differentially expressed genes (DEGs) were mainly concentrated in the carbohydrate metabolism pathway, with 116 up-regulated and 120 down-regulated genes ( C).
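The chlorophyll fluorescence parameters reported above (Fv/Fm, Fv/Fo, φEo, TRo/RC and PIABS) are derived from the fast OJIP fluorescence transient. The sketch below uses the standard JIP-test definitions with illustrative input values, not measurements from this study.

def jip_parameters(F0: float, Fm: float, F_300us: float, F_2ms: float) -> dict:
    """F0: minimal fluorescence; Fm: maximal fluorescence; F_300us, F_2ms: fluorescence at 300 µs and at the 2 ms J step."""
    phi_Po = 1 - F0 / Fm                  # Fv/Fm, maximal quantum yield of PSII photochemistry
    Fv_Fo = (Fm - F0) / F0                # potential activity of PSII
    V_J = (F_2ms - F0) / (Fm - F0)        # relative variable fluorescence at the J step
    psi_Eo = 1 - V_J                      # probability that a trapped exciton moves an electron onward
    phi_Eo = phi_Po * psi_Eo              # quantum yield of electron transport
    M_o = 4 * (F_300us - F0) / (Fm - F0)  # approximated initial slope of the transient
    TRo_RC = M_o / V_J                    # trapped energy flux per reaction centre
    PI_abs = (phi_Po * V_J / M_o) * (phi_Po / (1 - phi_Po)) * (psi_Eo / (1 - psi_Eo))
    return {"Fv/Fm": phi_Po, "Fv/Fo": Fv_Fo, "phiEo": phi_Eo, "TRo/RC": TRo_RC, "PIabs": PI_abs}

print(jip_parameters(F0=500, Fm=2500, F_300us=900, F_2ms=1500))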
2.2. Abscisic Acid (ABA) Production and Signal Transduction in R. chrysanthum in Response to UV-B The role of the above key metabolic pathways in actively responding to UV-B radiation cannot be ignored, and at the same time these metabolic pathways are regulated by phytohormone signaling networks. Therefore, the present study focused on comprehensively analyzing the changes in the levels of the various hormones and in the DEGs of their biosynthesis and signaling pathways in R. chrysanthum under UV-B radiation. This study aimed to present a complete picture of the role of R. chrysanthum phytohormones in the defense of R. chrysanthum against UV-B radiation. The results indicate that two genes exhibited differential expression during ABA synthesis. Among them, the expression level of ZEP was higher in UV-B-irradiated R. chrysanthum, whereas the expression level of CrtZ was higher in control leaves. In the ABA signaling pathway, the expression of PP2C and PYLs was down-regulated, while the expression of SnRK2 was up-regulated in R. chrysanthum after UV-B radiation. 2.3. Gibberellin (GA) Production and Signal Transduction in R. chrysanthum in Response to UV-B In GA biosynthesis, after UV-B radiation the metabolites GA53, GA19, GA20, GA24, GA1, GA3 and GA4 were down-regulated and GA15 and GA9 were up-regulated; CYP701 expression was down-regulated and GA3ox expression was up-regulated. In the GA signal transduction pathway, the expression of GID1 was up-regulated and that of DELLA was down-regulated after UV-B radiation. 2.4. Jasmonic Acid Production and Signal Transduction in R. chrysanthum in Response to UV-B This study examined R. chrysanthum's jasmonic acid (JA) biosynthesis and signal transduction pathways to determine the metabolite contents and the expression patterns of DEGs. JA-Ile, α-linolenic acid and JA contents were higher in the control group (CK group) than in the UV-B group, whereas OPDA content was lower in the CK group than in the UV-B group. Except for PLA, LOX2s, DAD1, AOS, ACAA1 and MFP25, the biosynthesis genes, which catalyze a series of reactions in the JA biosynthesis pathway, were up-regulated in leaves after UV-B radiation. MYC2 and JAZ, important components of the JA signaling pathway, were down-regulated in response to UV-B radiation. 2.5. Auxin Production and Signal Transduction in R. chrysanthum in Response to UV-B Radiation The content of indole-3-acetonitrile was higher and the content of indole-3-acetate was lower in the leaves of R. chrysanthum under UV-B radiation. A total of seven differentially expressed genes of the auxin pathway were obtained, including DDC, TAA1, ALDH, CYP71A13 and amiE family genes. After UV-B radiation, the expression of DDC and TAA1 was down-regulated, while the expression of CYP71A13, ALDH and amiE was up-regulated. Through differential gene screening, two differentially expressed gene families were found in the auxin signaling pathway, namely the AUX/IAA and ARF families. Analysis of the gene expression profiles of each family showed that the AUX/IAA and ARF gene families were down-regulated in the leaves of the UV-B group. 2.6. Salicylic Acid (SA) Production and Signal Transduction in R. chrysanthum in Response to UV-B Radiation We studied the metabolite contents, the expression of related genes in the process of SA synthesis in R. chrysanthum, and the SA signal transduction pathway. The phenylalanine pathway is the earliest discovered pathway for SA synthesis. The results show that UV-B radiation increased the contents of phenylalanine and SA in R. chrysanthum leaves, and the expression levels of the TAT and HPD genes in UV-B-irradiated leaves were higher than those in the CK group. The signal transduction pathway of SA is mainly NPR1-dependent, and the expression level of the NPR1 gene was higher in leaves of the UV-B group. 2.7. Cytokinin (CK) Production and Signal Transduction in R. chrysanthum in Response to UV-B Radiation Through the analysis of the level of CK in the leaves of R. chrysanthum
Through the analysis of CK levels in the leaves of R. chrysanthum before and after UV-B radiation, it was found that the contents of several CK metabolites increased significantly after radiation. In this study, we collected genes related to various aspects of CK homeostasis. The results show that the expression level of CRE1/AHK4 increased in the CK synthesis pathway due to UV-B radiation in plant leaves, while type-B ARR expression decreased in the CK signal transduction pathway .

2.8. Ethylene Production and Signal Transduction in R. chrysanthum in Response to UV-B Radiation

The levels of carboxylic acid and ethylene metabolites in R. chrysanthum leaves decreased under UV-B radiation, and the expression levels of the mtnK and TAT genes increased in the ethylene biosynthesis pathway, while AMD1 expression decreased. The expression level of CTR1 decreased in the ethylene signal transduction pathway.

2.9. Correlation Analysis of Various Phytohormones of R. chrysanthum with Photosynthetic Indicators

In order to explore the relationship between the various plant hormones under UV-B radiation and the key genes closely associated with them, their regulatory effects on the R. chrysanthum photosynthetic system were analyzed. The correlations between DEGs in the various plant hormone pathways and the above photosynthetic physiological indexes were analyzed. The results show that the key factors involved in CK regulation under UV-B radiation in R. chrysanthum may include the BARR and CRE genes. The ALDH , CYP71A13 , AUX/IAA and amiE genes in the IAA pathway may be related to protection against UV-B radiation. Under UV-B radiation, ethylene and SA each have two strongly correlated DEGs, namely mtnK and CTR1 for ethylene, and HPD and TAT-1 for SA. GA, ABA and JA each involve one closely associated DEG: CYP701-2 , SNRK2 and PAL , respectively. This experiment also found that phytohormones affect plant photosynthesis, and that plants reduce photo-oxidative damage under unfavorable conditions through multiple layers of positive and negative regulation. For example, CKs, SA and JAs are promoters of photomorphogenesis, whereas ethylene, ABA, auxins and GAs are negative regulators .
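To make the correlation screen described above concrete, the sketch below shows how such an analysis could be run in Python with pandas and SciPy, mirroring the thresholds reported in the Methods (|r| ≥ 0.9, p < 0.05). The file names and table layout are hypothetical placeholders, not the authors' actual data files.

```python
# Illustrative sketch of a hormone-vs-DEG Pearson correlation screen.
# Assumes two hypothetical tables with matched sample columns:
#   hormones.csv: rows = hormone metabolites, columns = samples (CK1-3, UVB1-3)
#   deg_fpkm.csv: rows = DEGs, columns = the same samples (FPKM values)
import pandas as pd
from scipy.stats import pearsonr

hormones = pd.read_csv("hormones.csv", index_col=0)
deg_fpkm = pd.read_csv("deg_fpkm.csv", index_col=0)

R_THRESHOLD = 0.9   # correlation threshold stated in the Methods
P_THRESHOLD = 0.05  # significance level stated in the Methods

records = []
for hormone, h_values in hormones.iterrows():
    for gene, g_values in deg_fpkm.iterrows():
        r, p = pearsonr(h_values.values, g_values.values)
        if abs(r) >= R_THRESHOLD and p < P_THRESHOLD:
            records.append({"hormone": hormone, "gene": gene, "r": r, "p": p})

correlations = pd.DataFrame(records).sort_values("r", ascending=False)
print(correlations.head())
```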
Light energy induces a series of stress responses in plants, whereby interactions between phytohormones and ROS can contribute to plant stress adaptation . In plant leaves, chloroplasts can serve as the first line of defense to protect PSII reaction centers from light damage. Accordingly, in terms of chlorophyll fluorescence parameters, we found that the fluorescence intensity of R. chrysanthum decreased successively after UV-B radiation, indicating that UV-B radiation reduced the reducing capacity of both the fast-reducing and slow-reducing plastoquinone (PQ) pools and that the acceptor side of the PSII reaction center was damaged. UV-B radiation significantly reduced the maximum photochemical efficiency (Fv/Fm) and potential activity (Fv/Fo) of PSII, whereas the decrease in TRo/RC reflected an increase in the inactivation of the PSII reaction center .
φEo represents the quantum yield of light energy absorbed by the reaction center for electron transfer and reflects the conversion efficiency of light energy . UV-B radiation reduced photosynthetic pigments in R. chrysanthum , resulting in a decrease in the light energy captured by the photosynthetic system and absorbed by the photosynthetic reaction centers, while damaging the photosynthetic apparatus and reducing the effective photosynthetic efficiency. This may be due to the effect of UV-B radiation on membrane structure, resulting in fewer antenna complexes distributed in the membrane and thus lower energy conversion and absorption efficiency . This down-regulation of the PSII reaction center may be a coping mechanism adopted by R. chrysanthum to maintain PSII energy conversion efficiency when resisting UV-B radiation. Ethylene is the only gaseous plant hormone, and it plays an important role in plant growth, development and response to stress. Ethylene synthesis consists of two main enzymatic reactions: (1) methionine is converted to 1-aminocyclopropane-1-carboxylic acid (ACC); (2) ACC is converted to ethylene. These two steps are catalyzed by ACC synthase (ACS) and ACC oxidase (ACO), respectively . A complex ethylene signaling pathway model has been established, extending from ethylene perception to the transcriptional regulation of downstream ethylene response factors. In the correlation analysis, the ethylene content in R. chrysanthum was negatively correlated with mtnK under UV-B radiation, suggesting that increased mtnK expression may reduce ethylene accumulation. The CTR1 gene is positively correlated with the ethylene level in R. chrysanthum ; when the ethylene signal is present, ethylene is able to bind to the ethylene receptor (ETR) on the endoplasmic reticulum (ER) membrane and the signal is then transduced via CTR1 and EIN2 . This signal is further amplified in an EIN3-mediated transcriptional activation cascade that activates the expression of ethylene response genes . In the absence of the ethylene signal, the ethylene receptor instead activates the kinase activity of CTR1 , which phosphorylates the C-terminal domain of EIN2 , thereby preventing its participation in ethylene signal transduction . In conclusion, the accumulation and stabilization of the EIN3 protein favors the activation of the ethylene signaling pathway, thus enhancing the role of ethylene in plants. Increases in CTR1 and EBF1/2 can directly or indirectly inhibit ethylene signal transduction and block transmission of the ethylene signal, playing an important negative regulatory role. These results suggest that CTR1 and mtnK may be involved in the effect of ethylene on UV-B resistance in R. chrysanthum leaves. ABA is an important plant hormone with many physiological functions, such as inhibiting growth, promoting abscission, promoting dormancy, causing stomatal closure, regulating seed embryo development, promoting fruit ripening and increasing stress resistance . Nevertheless, there was no significant change in ABA content after UV-B radiation, which indicates that the leaves of R. chrysanthum had a certain resistance to UV-B radiation. A number of signaling intermediates associated with ABA responses have been identified by previous studies, and they are tightly controlled by intracellular signal transduction pathways .
Progress in exploring ABA signaling has contributed to the construction of the PYL-PP2C-SnRK2 signaling model. The co-regulatory network of ABA metabolic pathways in R. chrysanthum showed that SnRK2 expression was negatively correlated with ABA levels . According to previous studies, PYR/PYLs are ABA receptors located at the top of the negative regulatory pathway which inhibit PP2Cs from controlling ABA signaling . Therefore, we speculate that PYL2 may interact with PP2C to inhibit phosphatase activity, allowing SnRK2 activation and target protein phosphorylation, thereby conferring a certain resistance to UV-B radiation on the leaves. Cytokinin (CK) plays critical regulatory roles in phloem differentiation, chloroplast differentiation, microtubule differentiation, leaf senescence, apical dominance regulation and responses to stress . The decrease in CK levels caused by UV-B radiation may be related to the degree of photosynthesis per unit leaf area in the exposed leaves. Studies on Arabidopsis thaliana and tobacco showed that the maximum photosynthetic rate, transpiration rate, stomatal conductance and mass per unit leaf area were reduced, along with CK levels, when the plants were exposed to light treatment . The results of this experiment show that the CK content in the leaves of R. chrysanthum was positively correlated with ARR-B, indicating that UV-B radiation affects CK signaling in R. chrysanthum . B-type ARRs are a class of transcription factors that activate the transcription of A-type ARR genes. Previous studies have found that the expression of A-type ARR genes leads to earlier flowering time, longer root systems, more lateral roots, earlier senescence and reduced CK sensitivity in transgenic plants, and that these genes play a major role among the many transcription factors involved in CK signaling . ARR-B may regulate leaf resistance to radiation through the CK signaling pathway in R. chrysanthum . Auxin is a general term for indole-3-acetic acid (IAA) and its similarly acting analogs that occur naturally in plants . As a signaling compound that promotes and influences plant development and physiological changes, it affects almost the whole process of plant growth and development . UV-B stress changes auxin distribution in plants by affecting auxin synthesis and transport, and it influences downstream genes through the auxin signal transduction process. The analysis shows that auxin content was negatively regulated by ALDH , CYP71A13 and amiE and positively regulated by AUX/IAA . This indicates that, in the auxin biosynthesis pathway, UV-B radiation reduces auxin accumulation by increasing the expression of these genes, and that Aux/IAA proteins are transcriptional repressors that mainly inhibit the activity of ARF transcription factors . In the signaling pathway, when auxin is sensed in the nucleus, it binds to the receptor TIR1/AFB , which promotes degradation of the Aux/IAA transcriptional repressor protein, releasing the ARF transcription factor and activating downstream gene expression . Salicylic acid (SA), a naturally occurring small-molecule phenolic in the plant kingdom, plays an important role in plant responses to biotic and abiotic stresses as a signaling molecule for plant disease resistance responses. In most plants, SA is synthesized mainly by the phenylalanine pathway.
Phenylalanine is deaminated by phenylalanine ammonia-lyase (PAL) to produce cinnamic acid, which is then converted to SA via benzoic acid . Cinnamic acid is also the precursor to many polyphenols, such as flavonoids . Studies have shown that UV-B stress can increase endogenous SA content in plants by 7–10-fold or more . The results of this experiment show that SA content increased in the leaves of R. chrysanthum under UV-B radiation. Correlation analysis shows that TAT1 and HPD expression levels were positively correlated with SA content. Therefore, we hypothesized that the higher SA accumulation in the leaves of R. chrysanthum under UV-B radiation might be related to the higher expression of the TAT1 and HPD genes. Certainly, this conjecture needs to be verified by further experiments. Jasmonic acids (JAs) are an important class of lipid-derived phytohormones widely distributed in higher plants and regulate a variety of physiological processes . JAs can act as endogenous signaling molecules to enhance plant resistance to different adversity stresses by regulating gene expression and subsequently accumulating secondary metabolites . JA content in the leaves of R. chrysanthum was reduced under UV-B radiation, and correlation analysis shows that PLA expression was positively correlated with JA content. PLA encodes a key enzyme involved in the JA biosynthesis pathway. It has been shown that a PLA2 with a molecular mass of approximately 48 kDa was purified from leaf membrane fractions of faba bean and was suggested to play an important role in the production of linolenic acid . Gibberellins (GAs) are diterpenoid phytohormones that play a crucial role in the whole life cycle of plants . In recent years, numerous studies have shown that GAs significantly promote plant seed germination and seedling growth under adverse environmental conditions, as GAs can alleviate the inhibitory effects of stress on plants; to date, up to a hundred GAs have been isolated from various organisms and are named in the order of their discovery . According to our results, the total content of GA compounds was lower in UV-B-irradiated R. chrysanthum leaves than in control leaves, and correlation analysis shows that the expression of the CYP701 gene in the GA synthesis pathway was negatively correlated with GA content. CYP701 encodes an enzyme of the GA biosynthesis pathway. Therefore, the effect of GA on the resistance of R. chrysanthum leaves to UV-B radiation needs to be further investigated. In order to survive, plants must adapt to a wide range of abiotic and biotic stresses. Unfavorable environmental conditions caused by abiotic factors, for example increased light, a lack of water, low oxygen levels due to waterlogging, temperature extremes, salinity and pollutants, may decrease photosynthesis, leading to excess excitation energy in chloroplasts . Flexible, complementary interactions between the phytohormones that regulate photosynthesis are required during plant growth and development to optimize plant performance and adaptability in a continuously changing environment. Chloroplast development is dependent on complex interactions between phytohormones, whereby the plant minimizes any possible photo-oxidative damage in chloroplasts during light-induced de-etiolation. For example, CKs, JAs and DELLA proteins directly or indirectly promote photomorphogenesis .
In contrast, ethylene, auxin and GAs negatively regulate photomorphogenesis either via PIFs or by inhibiting B-GATAs . Plant growth and development, including photosynthesis across different plant tissues and cells, are regulated by complementary phytohormones. CK positively regulates photosynthesis by promoting the expression of photosynthesis-related genes . In contrast, ABA negatively regulates photosynthesis by inhibiting stomatal formation. In addition, interactions between CK and auxin regulate plant photosynthesis, whereas cross-talk between auxin and ethylene inhibits ethylene biosynthesis and prolongs photosynthesis . Plant tolerance to environmental stresses and the manner in which these photoprotective mechanisms are activated vary from species to species, but typically a range of responses occur in plants, some of which are regulated by phytohormones. In fact, studies to more fully understand the effects of hormones on photosynthesis and photoprotection under abiotic stresses are not only important for a better comprehension of the underlying biological processes but may also contribute to the development of new crops with a more robust photosynthetic system. This study of phytohormonal regulation in R. chrysanthum under UV-B radiation provides valuable insights for enhancing plant stress tolerance. These insights can be applied to develop genetically engineered crops with improved UV-B tolerance and higher yields. Additionally, this study highlights the potential for metabolic engineering to boost the production of secondary metabolites with pharmaceutical value. The identified gene expression patterns can also serve as molecular tools for monitoring UV-B stress in plants. Based on these findings, we hypothesized that engineering modifications to these molecular pathways, such as enhancing CK synthesis or SA signaling, may help plants to better adapt and achieve growth advantages under UV-B radiation. This could not only support plant survival in the natural environment but may also provide a theoretical basis for the development of crop varieties tolerant to adverse environmental stresses in agricultural production. Overall, the results lay a foundation for developing climate-resilient plants and sustainable agricultural practices.

4.1. Plant Materials and Treatment

R. chrysanthum was preserved in an artificial climate room at 18 °C (14 h light)/16 °C (10 h dark) under white fluorescent light at 50 µmol photons m⁻² s⁻¹ . Tissue-culture seedlings of R. chrysanthum with the same growth state, cultured for 8 months, were selected as the research material. The experimental materials were divided into an experimental group (UV-B group) and a control group (CK group). The CK and UV-B groups were transplanted into 1/4 MS medium. The UV-B group was irradiated with UV-B for 2 days, 8 h per day, and the control group received PAR (photosynthetically active radiation) treatment for the same 2 days, 8 h per day . To account for inter-individual differences, three biological replicates were performed in this study. The PAR irradiation treatment involved the placement of a 400 nm photofilm (Edmund, Filter Long 2IN SQ, Barrington, NJ, USA). The UV-B radiation treatment likewise involved the placement of filters (Edmund, Filter Long 2IN SQ, Barrington, NJ, USA). The artificial UV-B radiation source was a UV-B TL 20W/01RS fluorescent tube (Philips, UltravioletB TL 20 W/01 RS, Amsterdam, The Netherlands).
The effective irradiance received by the samples was 2.3 W/m² for UV-B and 50 µmol/(m²·s) for PAR.

4.2. Identification and Quantification of Metabolites

We strictly followed the experimental steps of a previous study . The biological samples were vacuum freeze-dried in a lyophilizer (Scientz-100F, Ningbo Scientz Biotechnology Co., Ltd., Ningbo, China) and then ground (30 Hz, 1.5 min) to a powder using a grinder (MM400, Retsch, Haan, Germany). Next, 50 mg of sample powder was weighed on an electronic balance (MS105DU), and 1200 μL of 70% aqueous methanol containing internal standard, pre-cooled to −20 °C, was added (for samples of less than 50 mg, extractant was added at a rate of 1200 μL per 50 mg of sample) . Vortexing was performed for 30 s every 30 min, six times in total. After centrifugation (12,000 rpm, 3 min), the supernatant was withdrawn, filtered through a microporous membrane with a pore size of 0.22 μm and stored in an injection vial for LC-MS/MS (liquid chromatography–tandem mass spectrometry) analysis. Analysis of the sample extracts was performed using an LC-MS/MS system. Metabolite profiling data were obtained from the different samples, and the chromatographic peaks of all substances were integrated and corrected. Metabolites with a fold change (FC) ≥ 2 or FC ≤ 0.5 were selected as differential metabolites.

4.3. cDNA Library Construction and Transcriptomics Data Analysis

Transcriptomics experiments were conducted strictly according to previous experimental instructions . Total RNA was extracted from the samples using the CTAB method. rRNA was removed by hybridizing it with DNA probes; the resulting DNA/RNA hybrids were digested with RNase H, the DNA probes were digested with DNase I, and the remaining RNA was purified. The purified RNA was fragmented with a disruption buffer, reverse-transcribed with random N6 primers, and synthesized into double-stranded cDNA; the double-stranded cDNA was phosphorylated at the 5′ end, given a sticky end with a protruding "A" at the 3′ end, and ligated to a bubble-shaped adapter with a protruding "T" at the 3′ end. The ligated product was amplified by PCR with specific primers; the PCR product was heat-denatured to a single strand, and the single-stranded DNA was then cyclized with a bridge primer to obtain a single-stranded circular DNA library, which was finally sequenced . The transcriptome sequencing in this study was performed using the MGISEQ-2000 platform of BGI Genomics Co., Ltd. (Shenzhen, China); a total of 58 Gb of data was sequenced, and 93,034 unigenes were obtained after assembly and removal of redundancy. To ensure the reliability of the results, reads were filtered with SOAPnuke (v1.4.0), and Bowtie2 (v2.2.5) was used to align the clean reads to the reference gene sequences. Gene and transcript expression levels were estimated using RSEM (v1.2.8) . To identify differentially expressed genes following the experimental treatments, the expression level of each transcript was measured using the FPKM (Fragments Per Kilobase of transcript per Million mapped reads) method. Subsequently, the DESeq2 method, based on the negative binomial distribution, was employed to analyze differentially expressed genes in response to the experimental treatments.
Genes were identified as differentially expressed when they exhibited an FC (fold change) greater than 1 and a q-value (adjusted p -value) < 0.05. Functional annotation and classification of unigenes were conducted using publicly available database resources, including KEGG, Pfam and SwissProt. The annotation process involved sequence alignment of the unigene datasets against the reference sequences in these databases using BLAST-based comparison algorithms.

4.4. Chlorophyll Fluorescence Measurements

The CK and UV-B groups were dark-treated for 20 min before measurement, and chlorophyll fluorescence parameters were then obtained using an IMAGING-PAM M-series instrument (Walz, Effeltrich, Germany) .

4.5. Rapid Fluorescence Detection

Our method followed previous experimental descriptions . The Handy-PEA instrument was operated as follows: the leaf clip was clamped onto the front of the leaf and the leaf was dark-adapted for 20 min. To minimize the effect of leaf heterogeneity on the detection results, eight detection points were selected for each leaf: four along both sides of the main vein and four equally spaced from the leaf tip to the leaf base. After dark adaptation, the instrument probe was attached to the leaf clip and the slide switch was opened to expose the measuring aperture to the light source. Fast fluorescence signals were acquired for 1 s under the pre-set LED light source of 3000 μmol m⁻² s⁻¹. Finally, the fluorescence signals of the eight detection points on each leaf were averaged and used as the final fast fluorescence data for the sample.

4.6. Determination of Chlorophyll Content

A total of 1.0 g of leaves of R. chrysanthum from the CK and UV-B groups was weighed, ground repeatedly in 80% acetone solution and filtered, and the filtrate was collected. The absorbance of the extracts at 663 nm and 645 nm was determined colorimetrically. This process was repeated three times. The mass concentrations of chlorophyll a and chlorophyll b in the extracts were calculated from the formulas below, and the chlorophyll content in the leaves was then expressed as the mass of chlorophyll per gram of fresh leaf tissue (mg/g). In the formulas, ρ is the mass concentration of chlorophyll in mg/L, V is the total volume of the sample extract in mL, and m is the sample mass in g:

ρ_a = 12.72 × A_663 − 2.59 × A_645
ρ_b = 22.88 × A_645 − 4.67 × A_663
ρ = ρ_a + ρ_b
chlorophyll content (mg/g) = ρ × V / (m × 1000)

4.7. Statistical Analysis

The trials were executed on three occasions, employing a fully randomized design, and the data were analyzed utilizing IBM SPSS Statistics version 26. A one-way ANOVA was utilized to evaluate the statistical significance of the findings. Upon identifying significant disparities, Duncan's multiple range test was implemented to pinpoint the specific mean differences at a significance threshold of p < 0.05. Genomic expression data were processed through the interactive data mining system Dr.Tom . The relationships between plant hormones and differentially expressed genes (DEGs) derived from RNA sequencing were examined using Pearson's correlation coefficient, with a stringent correlation threshold set at 0.9 and a significance level of p < 0.05.
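As a worked illustration of the chlorophyll equations in Section 4.6 above, the short sketch below computes chlorophyll content from a pair of absorbance readings; the numerical inputs are hypothetical examples, not measured values from this study.

```python
# Worked example of the chlorophyll equations in Section 4.6.
# A663, A645: absorbance at 663 nm and 645 nm (hypothetical readings)
# V: total extract volume in mL; m: sample fresh weight in g
def chlorophyll_content(A663: float, A645: float, V: float, m: float) -> float:
    rho_a = 12.72 * A663 - 2.59 * A645      # chlorophyll a, mg/L
    rho_b = 22.88 * A645 - 4.67 * A663      # chlorophyll b, mg/L
    rho = rho_a + rho_b                     # total chlorophyll, mg/L
    return rho * V / (m * 1000.0)           # mg chlorophyll per g fresh weight

# Example with made-up readings: 25 mL of extract from 1.0 g of leaf tissue
print(chlorophyll_content(A663=0.65, A645=0.32, V=25.0, m=1.0))
```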
The dataset obtained from the above experiments was previously used to draw conclusions on other issues regarding the response of R. chrysanthum to UV-B .
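For clarity, the selection criteria described in Sections 4.2 and 4.3 (differential metabolites with FC ≥ 2 or ≤ 0.5; DEGs with FC > 1 and q-value < 0.05) amount to a simple filtering step. The sketch below shows one way this could look in Python with pandas; the file names and column names are hypothetical placeholders rather than the authors' actual outputs.

```python
# Illustrative filtering of differential metabolites and DEGs using the
# thresholds reported in the Methods. File and column names are assumed.
import pandas as pd

# Hypothetical table: one row per metabolite, 'fold_change' = UV-B vs CK ratio
metabolites = pd.read_csv("metabolite_fold_changes.csv")
diff_metabolites = metabolites[
    (metabolites["fold_change"] >= 2) | (metabolites["fold_change"] <= 0.5)
]

# Hypothetical DESeq2-style output: 'fold_change' and 'qvalue' (adjusted p-value) per gene
genes = pd.read_csv("deseq2_results.csv")
degs = genes[(genes["fold_change"] > 1) & (genes["qvalue"] < 0.05)]

print(f"{len(diff_metabolites)} differential metabolites, {len(degs)} DEGs")
```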
The present study provides a comprehensive analysis of phytohormonal regulation in R. chrysanthum under UV-B radiation, revealing the complex interplay between hormonal signaling and metabolic responses in plant stress tolerance. Our findings indicate that UV-B radiation significantly impacts the photosynthetic capacity of R. chrysanthum , leading to a decline in photosynthetic efficiency and damage to the photosynthetic system. This is accompanied by alterations in the levels of various phytohormones, their biosynthesis, and the expression of related signaling genes, which collectively contribute to the plant's defense mechanisms against UV-B stress . Additionally, while UV-B radiation primarily imposes stress on the plant, our study also hints at the potential for UV-B exposure to induce adaptive responses that may enhance certain growth-related processes in R. chrysanthum . This aligns with the concept of radiation hormesis, where radiation can stimulate plant growth through the activation of specific hormonal pathways and defense mechanisms. These findings not only expand our understanding of hormonal regulatory mechanisms in plant resistance to UV-B radiation but also lay the groundwork for developing strategies to enhance plant tolerance to adverse environmental stresses.
Large scale crowdsourced radiotherapy segmentations across a variety of cancer anatomic sites
00331ec6-e91d-443d-aa92-dc4ad12b62b9
10033824
Internal Medicine[mh]
Since the advent of contemporary radiation delivery techniques for cancer treatment, clinician generated segmentation (also termed contouring or delineation) of target structures (e.g., primary tumors and metastatic lymph nodes) and organs at risk (e.g., healthy tissues whose irradiation could lead to damage and/or side effects) on medical images has become a necessity in the radiotherapy workflow . These segmentations are typically provided by trained medical professionals, such as radiation oncologists. While segmentations can be performed on any imaging modality that provides sufficient discriminative capabilities to visualize regions of interest (ROIs), the current radiotherapy workflow prioritizes the use of computed tomography (CT) for ROI segmentation due to its ubiquitous nature and use in radiotherapy dose calculations. Subsequently, clinicians spend a large fraction of their time and effort generating ROI segmentations on CT imaging necessary for the radiotherapy workflow. Interobserver and intraobserver variability are well-documented byproducts of the use of manual human-generated segmentations , . While consensus radiotherapy guidelines to ensure ROI segmentation quality have been developed and shown to reduce variability , these guidelines are not necessarily followed by all practicing clinicians. Therefore, segmentation variability remains a significant concern in maintaining radiotherapy plan quality and consistency. Recent computational improvements in machine learning, particularly deep learning, have prompted the increasing development and deployment of accurate ROI auto-segmentation algorithms to reduce radiotherapy segmentation variability – . However, for auto-segmentation algorithms to be clinically useful, their input data (training data) should reflect high-quality “gold-standard” annotations. While research has been performed on the impact of interobserver variability and segmentation quality for auto-segmentation training – , it remains unclear how “gold-standard” segmentations should be defined and generated. One common approach, consensus segmentation generation, seeks to crowdsource multiple segmentations from different annotators to generate a high-quality ground-truth segmentation. While multi-observer public medical imaging segmentation datasets exist – , there remains a lack of datasets with a large number of annotators for radiotherapy applications. The Contouring Collaborative for Consensus in Radiation Oncology (C3RO) challenge was developed to engage radiation oncologists across various expertise levels in cloud-based ROI crowdsourced segmentation . Through this collaboration, a large number of clinicians generated ROI segmentations using CT images from 5 unique radiotherapy cases: breast, sarcoma, head and neck, gynecologic, and gastrointestinal. In this data descriptor, we present the curation and processing of the data from the C3RO challenge. The primary contribution of this dataset is unprecedented large-scale multi-annotator individual and consensus segmentations of various ROIs crucial for radiotherapy planning in an easily accessible and standardized imaging format. These data can be leveraged for exploratory analysis of segmentation quality across a large number of annotators, consensus segmentation experiments, and auto-segmentation model benchmarking. An overview of this data descriptor is shown in Fig. . 
Patient population

Five separate patients who had undergone radiotherapy were retrospectively collected from our collaborators at various institutions. Each patient had received a pathologically confirmed diagnosis of cancer of one of the following sites: breast (post-mastectomy intraductal carcinoma), sarcoma (malignant peripheral nerve sheath tumor of the left thigh), head and neck (oropharynx with nodal spread, [H&N]), gynecologic (cervical cancer, [GYN]), and gastrointestinal (anal cancer, [GI]). Clinical characteristics of these patients are shown in Table . Of note, these five disease sites were included as part of the C3RO challenge due to being among the most common disease sites treated by radiation oncologists; additional disease sites were planned but were not realized due to diminishing community participation in C3RO. Specific patient cases were selected by C3RO collaborators on the basis of being adequate reflections of routine patients a generalist radiation oncologist may see in a typical workflow (i.e., not overly complex). Further details on the study design for C3RO can be found in Lin & Wahid et al . .

Imaging protocols

Each patient received a radiotherapy planning CT scan which was exported in Digital Imaging and Communications in Medicine (DICOM) format. CT image acquisition characteristics are shown in Table . All images were acquired on scanners that were routinely used for radiotherapy planning at their corresponding institutions with appropriate calibration and quality assurance by technical personnel. The sarcoma, H&N, and GI cases received intravenous contrast, the GYN case received oral contrast, and the breast case did not receive any contrast. Of note, the H&N case had metal streak artifacts secondary to metallic implants in the upper teeth, which obscured anatomy near the mandible. No other cases contained noticeable image artifacts. Notably, the sarcoma case also received a magnetic resonance imaging (MRI) scan, while the H&N and GI cases received full body positron emission tomography (PET) scans. The sarcoma MRI scan was acquired on a GE Signa HDxt device and corresponded to a post-contrast spin echo T1-weighted image with a slice thickness of 3.0 mm and in-plane resolution of 0.35 mm. The H&N PET scan was acquired on a GE Discovery 600 device with a slice thickness of 3.3 mm and in-plane resolution of 2.73 mm. The GI PET scan was acquired on a GE Discovery STE device with a slice thickness of 3.3 mm and in-plane resolution of 5.47 mm.

IRB exemption and data storage

The retrospective acquisition, storage, and use of these DICOM files have been reviewed by the Memorial Sloan Kettering (MSK) Human Research Protection Program (HRPP) Office on May 26, 2021 and were determined to be exempt research as per 45 CFR 46.104(d)(3),(i)(a), (ii) and (iii), (i)(b),(ii) and (iii), (i)(c), (ii) and (iii) and 45.CFR.46.111(a)(7). A limited IRB review of the protocol X19-040 A(1) was conducted via expedited process in accordance with 45 CFR 46.110(b), and the protocol was approved on May 26, 2021. DICOM files were obtained and stored on MIMcloud (MIM Software Inc., Ohio, USA), which is a HIPAA-compliant cloud-based storage for DICOM image files that has been approved for use at MSK by MSK's Information Security team.

DICOM anonymization

For each image, the DICOM header tags containing the patient name, date of birth, and patient identifier number were consistently removed from all DICOM files using DicomBrowser v. 1.5.2 .
Acquisition date and time metadata (if available in DICOM header tags) were kept as is, because their removal caused compatibility issues with ProKnow. Moreover, if institution name or provider name were available in the DICOM file, they were not removed as they were not considered protected health information. Select cases (breast, GYN, GI) were previously anonymized using the DICOM Import Export tool (Varian Medical Systems, CA, USA).

Participant details

To register for the challenge, participants completed a baseline questionnaire that included their name, email address, affiliated institution, country, specialization, years in practice, number of disease sites treated, volume of patients treated per month for the designated tumor site, how they learned about this challenge, and reasons for participation. Registrant intake information was collected through the Research Electronic Data Capture (REDCap) system - a widely used web application for managing survey databases ; an example of the intake form can be found at: https://redcap.mskcc.org/surveys/?s=98ARPWCMAT . The research conducted herein was approved by the HRPP at MSK (IRB#: X19-040 A(1); approval date: May 26, 2021). All subjects prospectively consented to participation in the present study, as well as to the collection, use, and disclosure of de-identified aggregate subject information and responses. Participants were categorized as recognized experts or non-experts. Recognized experts were identified by our C3RO team (EFG, CDF, DL) based on participation in the development of national guidelines or other extensive scholarly activities. Recognized experts were board-certified physicians with expertise in the specific disease site. Non-experts were any participants not categorized as an expert for that disease site. All non-experts had some knowledge of human anatomy, with the majority being composed of practicing radiation oncologists but also included resident physicians, radiation therapists, and medical physicists. Worthy of note, a participant could only be considered an expert for one disease site, but could have participated as a non-expert for other disease sites. Out of 1,026 registrants, 221 participated in generating segmentations, which were used for this dataset; due to the low participation rate, participants may represent a biased sample of registrants. Of note, participants could provide segmentations for multiple cases. Additional demographic characteristics of the participants can be found in Lin & Wahid et al . .

ProKnow segmentation platform

Participants were given access to the C3RO workspace on ProKnow (Elekta AB, Stockholm, Sweden). ProKnow is a commercially available radiotherapy clinical workflow tool that allows for centralization of data in a secure web-based repository; the ProKnow system has been adopted by several large-scale medical institutions and is used routinely in clinical and research environments. Anonymized CT DICOM images for each case were imported into the ProKnow system for participants to segment; anonymized MRI and PET images were also imported for select cases as available. Each case was attributed a short text prompt describing the patient presentation along with any additional information as needed. Participants were allowed to utilize common image manipulation (scrolling capabilities, zooming capabilities, window leveling, etc.) and segmentation (fill, erase, etc.) tools for generating their segmentations.
No auto-segmentation capabilities were provided to the participants, i.e., all segmentations were manually generated. Notably, for the sarcoma case, an external mask of the patient's body and a mask of the left femur were provided to participants. Screenshots of the ProKnow web interface platform for the various cases are shown in Fig. .

Segmentation details

For each case, participants were requested to segment a select number of ROIs corresponding to target structures or OARs. Notably, not all participants generated segmentations for all ROIs. ROIs for each participant were combined into one structure set in the ProKnow system. ROIs were initially named in a consistent, but non-standardized format, so during file conversion ROIs were renamed based on the American Association of Physicists in Medicine Task Group 263 (TG-263) suggested nomenclature ; TG-263 was chosen due to its ubiquity in standardized radiotherapy nomenclature. A list of the ROIs and the number of available segmentations stratified by participant expertise level is shown in Supplementary Table .

Image processing and file conversion

For each case, anonymized CT images and structure sets for each annotator were manually exported from ProKnow in DICOM and DICOM radiotherapy structure (RTS) format, respectively. The Neuroimaging Informatics Technology Initiative (NIfTI) format is increasingly used for reproducible imaging research due to its compact file size and ease of implementation in computational models . Therefore, in order to increase the interoperability of these data, we converted all our DICOM imaging and segmentation data to NIfTI format. For all file conversion processes, Python v. 3.8.8 was used. An overview of the image processing workflow is shown in Fig. . In brief, using an in-house Python script, DICOM images and structure sets were loaded into numpy array format using the DICOMRTTool v. 0.4.2 library , and then converted to NIfTI format using SimpleITK v. 2.1.1 . For each annotator, each individual structure contained in the structure set was separately converted into a binary mask (0 = background, 1 = ROI), and was then converted into separate NIfTI files. Notably, voxels fully inside and fully outside the contour were included and not included in the binary mask, respectively, while voxels that overlapped the segmentation boundary (edge voxels) were counted as surface coordinates and included in the binary mask; additional details on array conversion can be found in the DICOMRTTool documentation . Examples of random subsets of five expert segmentations for each ROI from each case are shown in Fig. .

Consensus segmentation generation

In addition to ground-truth expert and non-expert segmentations for all ROIs, we also generated consensus segmentations using the Simultaneous Truth and Performance Level Estimation (STAPLE) method, a commonly used probabilistic approach for combining multiple segmentations . Briefly, the STAPLE method uses an iterative expectation-maximization algorithm to compute a probabilistic estimate of the "true" segmentation by deducing an optimal combination of the input segmentations and incorporating a prior model for the spatial distribution of segmentations as well as implementing spatial homogeneity constraints . For our specific implementation of the STAPLE method, we utilized the SimpleITK STAPLE function with a default threshold value of 0.95.
For each ROI, all available binary segmentation masks for a given expertise level acted as inputs to the STAPLE function, subsequently generating binary STAPLE segmentation masks for each expertise level (i.e., STAPLE expert and STAPLE non-expert ). An overview of the consensus segmentation workflow is shown in Fig. . Examples of STAPLE expert and STAPLE non-expert segmentations for each ROI are shown in Fig. .
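As a rough illustration of the consensus step described above, the sketch below combines a set of per-annotator NIfTI masks with SimpleITK's STAPLE filter and thresholds the resulting probability map at 0.95. The file paths are hypothetical placeholders, and this is a minimal sketch of the general approach rather than the exact script used to build the dataset.

```python
# Minimal sketch: STAPLE consensus over per-annotator binary masks (NIfTI).
# Assumes all masks share the same geometry (size, spacing, origin, direction).
import SimpleITK as sitk

mask_paths = [
    "expert_01_GTVp.nii.gz",   # hypothetical file names
    "expert_02_GTVp.nii.gz",
    "expert_03_GTVp.nii.gz",
]
masks = [sitk.ReadImage(p, sitk.sitkUInt8) for p in mask_paths]

# sitk.STAPLE returns a per-voxel probability map for the foreground label (here 1).
probability_map = sitk.STAPLE(masks, 1.0)

# Threshold the probability map (0.95, as in the text) to obtain a binary consensus mask.
consensus = sitk.Cast(probability_map > 0.95, sitk.sitkUInt8)
sitk.WriteImage(consensus, "STAPLE_expert_GTVp.nii.gz")
```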
DICOM files were obtained and stored on MIMcloud (MIM Software Inc., Ohio, USA), a HIPAA-compliant cloud-based storage for DICOM image files that has been approved for use at MSK by MSK’s Information Security team. For each image, the DICOM header tags containing the patient name, date of birth, and patient identifier number were consistently removed from all DICOM files using DicomBrowser v. 1.5.2 . Acquisition date and time metadata (if available in the DICOM header tags) were kept as is, because their removal caused compatibility issues with ProKnow. Moreover, if institution name or provider name were available in the DICOM file, they were not removed, as they were not considered protected health information. Select cases (breast, GYN, GI) were previously anonymized using the DICOM Import Export tool (Varian Medical Systems, CA, USA). To register for the challenge, participants completed a baseline questionnaire that included their name, email address, affiliated institution, country, specialization, years in practice, number of disease sites treated, volume of patients treated per month for the designated tumor site, how they learned about this challenge, and reasons for participation. Registrant intake information was collected through the Research Electronic Data Capture (REDCap) system, a widely used web application for managing survey databases ; an example of the intake form can be found at: https://redcap.mskcc.org/surveys/?s=98ARPWCMAT . The research conducted herein was approved by the HRPP at MSK (IRB#: X19-040 A(1); approval date: May 26, 2021). All subjects prospectively consented to participation in the present study, as well as to the collection, use, and disclosure of de-identified aggregate subject information and responses. Participants were categorized as recognized experts or non-experts. Recognized experts were identified by our C3RO team (EFG, CDF, DL) based on participation in the development of national guidelines or other extensive scholarly activities. Recognized experts were board-certified physicians with expertise in the specific disease site. Non-experts were any participants not categorized as an expert for that disease site. All non-experts had some knowledge of human anatomy; the majority were practicing radiation oncologists, but the group also included resident physicians, radiation therapists, and medical physicists. Worthy of note, a participant could only be considered an expert for one disease site, but could have participated as a non-expert for other disease sites. Out of 1,026 registrants, 221 participated in generating segmentations, which were used for this dataset; due to the low participation rate, participants may represent a biased sample of registrants. Of note, participants could provide segmentations for multiple cases. Additional demographic characteristics of the participants can be found in Lin & Wahid et al . . Participants were given access to the C3RO workspace on ProKnow (Elekta AB, Stockholm, Sweden). ProKnow is a commercially available radiotherapy clinical workflow tool that allows for centralization of data in a secure web-based repository; the ProKnow system has been adopted by several large-scale medical institutions and is used routinely in clinical and research environments. Anonymized CT DICOM images for each case were imported into the ProKnow system for participants to segment; anonymized MRI and PET images were also imported for select cases as available.
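The header de-identification above was performed with DicomBrowser and the Varian DICOM Import Export tool; purely as a generic illustration of the same tag-blanking step (not the tools actually used, and with hypothetical folder names), a pydicom sketch could look like this.

```python
# Illustration only: blank the identifying header tags named above, while leaving
# acquisition date/time and institution/provider tags untouched.
from pathlib import Path
import pydicom

TAGS_TO_BLANK = ["PatientName", "PatientBirthDate", "PatientID"]

def deidentify(src: Path, dst: Path) -> None:
    ds = pydicom.dcmread(src)
    for keyword in TAGS_TO_BLANK:
        if keyword in ds:
            setattr(ds, keyword, "")   # blank rather than delete, keeping the tag present
    dst.parent.mkdir(parents=True, exist_ok=True)
    ds.save_as(dst)

# Hypothetical folders:
# for f in Path("raw_dicom").glob("*.dcm"):
#     deidentify(f, Path("anon_dicom") / f.name)
```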
Each case was accompanied by a short text prompt describing the patient presentation, along with any additional information as needed. Participants were allowed to utilize common image manipulation (scrolling, zooming, window leveling, etc.) and segmentation (fill, erase, etc.) tools for generating their segmentations.
For our specific implementation of the STAPLE method, we utilized the SimpleITK STAPLE function with a default threshold value of 0.95. For each ROI, all available binary segmentation masks acted as inputs to the STAPLE function for each expertise level, subsequently generating binary STAPLE segmentation masks for each expertise level (i.e., STAPLE expert and STAPLE non-expert ). An overview of the consensus segmentation workflow is shown in Fig. . Examples of STAPLE expert and STAPLE non-expert segmentations for each ROI are shown in Fig. . Medical images and multi-annotator segmentation data This data collection primarily consists of 1985 3D volumetric compressed NIfTI files (.nii.gz file extension) corresponding to CT images and segmentations of ROIs from various disease sites (breast, sarcoma, H&N, GYN, GI). Analogously formatted MRI and PET images are available for select cases (sarcoma, H&N, GI). ROI segmentation NIfTI files are provided in binary mask format (0 = background, 1 = ROI); file names for each ROI are provided in TG-263 notation. All medical images and ROI segmentations were derived from original DICOM and DICOM RTS files (.dcm file extension) respectively, which for completeness are also provided in this data collection. In addition, Python code to recreate the final NIfTI files from DICOM files is also provided in the corresponding GitHub repository (see Code Availability section). Consensus segmentation data Consensus segmentations for experts and non-experts generated using the STAPLE method for each ROI have also been provided in compressed NIfTI file format (.nii.gz file extension). Consensus segmentation NIfTI files are provided in binary mask format (0 = background, 1 = ROI consensus). Python code to recreate the STAPLE NIfTI files from input annotator NIfTI files is also provided in the corresponding GitHub repository (see Code Availability section). Annotator demographics data We also provide a single Microsoft Excel file (.xlsx file extension) containing each annotator’s gender, race/ethnicity, geographic setting, profession, years of experience, practice type, and categorized expertise level (expert, non-expert). Geographic setting was re-coded as “United States” or “International” to further de-identify the data. Each separate sheet corresponds to a separate disease site (sheet 1 = breast, sheet 2 = sarcoma, sheet 3 = H&N, sheet 4 = GU, sheet 5 = GI). Moreover, in order to foster secondary analysis of registrant data, we also include a sheet containing the combined intake data for all registrants of C3RO, including those who did not provide annotations (sheet 6). Folder structure and identifiers Each disease site is represented by a top-level folder, containing a subfolder for images and segmentations. The annotator demographic excel file is located in the same top-level location as the disease site folders. Image folders contain separate subfolders for NIfTI format and DICOM format images. Segmentation folders contain separate subfolders for expert and non-expert segmentations. Each expertise folder contains separate subfolders for each annotator (which contains separate subfolders for DICOM and NIfTI formatted files) and the consensus segmentation (only available in NIfTI format). The data have been specifically structured such that for any object (i.e., an image or segmentation), DICOM and NIfTI subdirectories are available for facile partitioning of data file types. An overview of the organized data records for an example case is shown in Fig. . 
Segmentation files (DICOM and NIfTI) are organized by anonymized participant ID numbers and can be cross referenced against the excel data table using this identifier. The raw data, records, and supplemental descriptions of the meta-data files are cited under Figshare doi: 10.6084/m9.figshare.21074182 .
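For downstream analysis, the archived records can be navigated programmatically; the sketch below cross-references an annotator's demographics row with that annotator's NIfTI segmentations. The folder, sheet, and column names here are guesses and should be checked against the actual data records.

```python
# Hypothetical names throughout; adjust to the actual folder/sheet/column labels.
from pathlib import Path
import pandas as pd

root = Path("C3RO_data")
sheets = pd.read_excel(root / "annotator_demographics.xlsx", sheet_name=None)  # dict of DataFrames

annotator_id = "participant_042"              # anonymized ID used in the file names (made up)
hn = sheets["HN"]                             # one sheet per disease site (assumed sheet name)
demo_row = hn[hn["annotator_id"] == annotator_id]

nifti_masks = sorted((root / "HN" / "segmentations").rglob(f"*{annotator_id}*.nii.gz"))
print(demo_row)
print([p.name for p in nifti_masks])
```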
Data annotations Segmentation DICOM and NIfTI files were manually verified by study authors (D.L., K.A.W., O.S.) to be annotated with the appropriate corresponding ROI names. Segmentation interobserver variability We calculated the pairwise interobserver variability (IOV) for each ROI for each disease site across experts and non-experts. Specifically, for each metric all pairwise combinations between all available segmentations in a given group (expert or non-expert) were calculated; median and interquartile range values are reported in Supplementary Table . Calculated metrics included the Dice Similarity coefficient (DSC), average surface distance (ASD), and surface DSC (SDSC). SDSC was calculated based on ROI specific thresholds determined by the median pairwise mean surface distance of all expert segmentations for that ROI as suggested in literature . Metrics were calculated using the Surface Distances Python package , and in-house Python code. For specific equations for metric calculations please see corresponding Surface Distances Python package documentation . Resultant values are broadly consistent with previous work in breast , sarcoma , H&N , , , GYN , and GI – IOV studies. The image and segmentation data from this data collection are provided in original DICOM format (where applicable) and compressed NIfTI format with the accompanying excel file containing demographic information indexed by annotator identifiers. We invite all interested researchers to download this dataset for use in segmentation, radiotherapy, and crowdsourcing related research. Moreover, we encourage this dataset’s use for clinical decision support tool development. While the individual number of patient cases for this dataset is too small for traditional machine learning development (i.e., deep learning auto-segmentation training), this dataset could act as a benchmark reference for testing existing auto-segmentation algorithms. Importantly, this dataset could also be used as a standardized reference for future interobserver variability studies seeking to investigate further participant expertise criteria, e.g., true novice annotators (no previous segmentation or anatomy knowledge) could attempt to segment ROI structures on CT images, which could then be compared to our expert and non-expert annotators.
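Such comparisons can reuse the pairwise metrics described under Technical Validation; a minimal sketch with the surface-distance package is given below. The file names and the fixed 2 mm tolerance are illustrative only (the study used ROI-specific tolerances derived from expert surface distances).

```python
# Sketch of pairwise DSC, ASD, and surface DSC between all mask pairs of one ROI.
from itertools import combinations
import numpy as np
import SimpleITK as sitk
import surface_distance

def load_mask(path):
    img = sitk.ReadImage(path)
    # GetArrayFromImage returns (z, y, x); reverse spacing to match that axis order.
    return sitk.GetArrayFromImage(img).astype(bool), img.GetSpacing()[::-1]

def pairwise_metrics(mask_paths, tolerance_mm=2.0):
    rows = []
    for path_a, path_b in combinations(mask_paths, 2):
        a, spacing = load_mask(path_a)
        b, _ = load_mask(path_b)
        sd = surface_distance.compute_surface_distances(a, b, spacing_mm=spacing)
        rows.append({
            "dsc": surface_distance.compute_dice_coefficient(a, b),
            "asd": np.mean(surface_distance.compute_average_surface_distance(sd)),
            "sdsc": surface_distance.compute_surface_dice_at_tolerance(sd, tolerance_mm),
        })
    return rows

# e.g., pairwise_metrics(["expert_01_GTVp.nii.gz", "expert_02_GTVp.nii.gz", "expert_03_GTVp.nii.gz"])
```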
Finally, in line with the goals of the eContour collaborative , these data could be used to help develop educational tools for radiation oncology clinical training. The segmentations provided in this data descriptor have been utilized in a study by Lin & Wahid et al . . This study demonstrated several results that were consistent with existing literature, including: (1). target ROIs tended to exhibit greater variability than OAR ROIs , (2). H&N ROIs exhibited higher interobserver variability compared to other disease sites , , and (3). non-expert consensus segmentations could approximate gold-standard expert segmentations . Original DICOM format images and structure sets may be viewed and analyzed in radiation treatment planning software or select digital image viewing applications, depending on the end-user’s requirements. Current open-source software for these purposes includes ImageJ , dicompyler , ITK-Snap , and 3D Slicer with the SlicerRT extension . Processed NIfTI format images and segmentations may be viewed and analyzed in any NIfTI viewing application, depending on the end-user’s requirements. Current open-source software for these purposes includes ImageJ , ITK-Snap , and 3D Slicer . Supplementary Table 1 Supplementary Table 2
Role of preoperative transarterial chemoembolization (TACE) in intermediate‐stage hepatocellular carcinoma (Hong Kong liver cancer stage IIB)
adc7a763-4db7-4879-9691-476091754c77
11798681
Surgical Procedures, Operative[mh]
INTRODUCTION The principle of transarterial chemoembolization (TACE) in the treatment of HCC is attributed to the predominantly hepatic arterial blood supply of the tumor. TACE has an established role in advanced HCC, as recommended in the Barcelona Clinic Liver Cancer (BCLC) staging system and in Hong Kong Liver Cancer (HKLC) stages IIIA and IIIB. Surgery is the modality of choice for HKLC IIB. , However, there is a subgroup of patients in stage IIB who may not be suitable for upfront surgical resection due to inadequate/borderline future liver remnant (FLR), multicentric disease, or poor performance status. Some of the potential advantages of using TACE in a preoperative setting include tumor downsizing, detection of multicentricity, prevention of intraoperative tumor dissemination, and assessment of tumor biology. , , , Our previous publication demonstrated the feasibility and utility of TACE in a preoperative setting in a select group of patients. The present study assesses the effects of preoperative TACE on survival in patients with intermediate‐stage HCC (HKLC stage IIB disease). MATERIALS AND METHODS A retrospective analysis of a prospectively maintained database of all patients with HCC who presented to our center between January 2010 and August 2022 was performed. Patients managed with curative intent, either with upfront surgical resection (UPS) or after preoperative TACE (pTACE), were included in the study. The study protocol conformed to the ethical guidelines of the “World Medical Association Declaration of Helsinki—Ethical Principles for Medical Research Involving Human Subjects”. Decisions regarding treatment plans were taken in a dedicated multidisciplinary ‘Liver clinic’ comprising hepato‐pancreato‐biliary (HPB) surgical oncologists, interventional radiologists, hepatologists, medical oncologists, and radiation oncologists. All patients underwent preoperative evaluation including blood investigations, tumor markers [carcinoembryonic antigen (CEA), cancer antigen 19‐9 (CA 19‐9), and alpha-fetoprotein (AFP) levels], serology, calculation of the modified Child–Turcotte–Pugh (CTP) score, a triple‐phase contrast‐enhanced computed tomography (CECT) or gadolinium‐enhanced magnetic resonance imaging (MRI) of the liver, and a CT thorax for staging. Patients were staged according to the BCLC and HKLC staging systems. , , The diagnosis of HCC was made on the characteristic radiological features of arterial enhancement and venous-phase washout. Equivocal radiological findings warranted a biopsy for confirmation. Assessment of cirrhosis was done on CT/MRI based on liver contour irregularities, caudate hypertrophy, and the presence of collaterals. Cirrhotic patients underwent upper gastrointestinal endoscopy to look for stigmata of portal hypertension. Tumor burden score (TBS) was calculated by applying the Pythagorean formula [TBS² = (maximum tumor diameter)² + (number of tumors)²] to preoperative imaging data. TACE was considered in the following patients with intermediate‐stage HCC, as per the institutional protocol [Figure ]: (1) for downsizing the tumor to achieve adequate FLR (if pTACE was insufficient to achieve the desired minimum FLR, portal vein embolization (PVE) was performed); (2) to rule out multicentricity, especially in cirrhotic patients, if there was doubt of a tumor nodule/dysplastic nodule (<1 cm) in the contralateral lobe;
(3) patients with comorbidities who need optimization before resection; (4) tumor bleed/rupture; (5) presence of vascular invasion (infiltration or thrombosis) of the intrahepatic portal vein or hepatic veins; and (6) patients with clinically significant portal hypertension (CSPH) and borderline FLR (<30%). The following patients were excluded: HKLC stages I, IIA, III, and IV; metastatic disease at presentation; recurrent disease; cirrhotic patients with a CTP score ≥8; and main portal vein thrombosis or invasion. TACE was performed using a standard femoral approach. Drug‐eluting beads (Bio‐compatibles UK, Surrey, UK) 300–500 μm in size, with a dose of 50–75 mg of doxorubicin, were injected. In some patients, conventional TACE was also done with 50 mg of doxorubicin and 10 mL of lipiodol. Response to therapy was evaluated by contrast‐enhanced CT/MRI using the modified Response Evaluation Criteria in Solid Tumors (mRECIST). After ruling out distant metastasis on abdominal exploration, an intraoperative ultrasound (IOUS) of the liver was performed in all patients to identify previously undetected lesions and to assess the relation of the tumor to major vascular structures. Hypotensive anesthesia and portal triad clamping (Pringle maneuver) were selectively utilized. Parenchymal transection was performed predominantly using a cavitron ultrasonic surgical aspirator (CUSA), along with either water jet, LigaSure, or harmonic scalpel at the surgeon's discretion. Postoperative complications were recorded based on the International Study Group of Liver Surgery (ISGLS) criteria as well as the Clavien–Dindo classification. , , , All patients were followed up at three-monthly intervals for the first 2 years and six-monthly thereafter. Statistical analyses were performed in an intention‐to‐treat manner to compare UPS to pTACE as a primary treatment in intermediate-stage HCC. To minimize bias between the pTACE group and the UPS group, propensity score matching was used. The clinical variables obtained at the time of initial diagnosis and considered to have influenced the decision concerning the primary treatment were used for 1:1 matching, with the match tolerance kept at 0.05. Categorical variables were analyzed using Pearson's χ² test, whereas continuous variables were analyzed using the Mann–Whitney U test. The primary endpoint of the study was overall survival (OS). OS was defined as the time interval between the start of treatment (i.e., neoadjuvant therapy or surgery) and the last follow‐up or death. Disease‐free survival (DFS) was defined as the time interval between the start of treatment and the first appearance of recurrence after surgery. Survival curves were plotted using the Kaplan–Meier method and were compared using the log‐rank test. Multivariate Cox regression analysis was performed to evaluate factors affecting OS. A p‐value of less than 0.05 was considered statistically significant. Statistical analyses were performed using the Statistical Product and Service Solutions (SPSS) software, version 25.0, for Windows (SPSS Inc., Chicago, IL, USA). RESULTS A total of 1168 patients were evaluated for the study, as shown in Figure . After eliminating the patients who did not meet the inclusion criteria, 375 patients with HCC were managed with curative intent. Of these 375 patients, 247 with intermediate‐stage HCC (HKLC stage IIB) were included in the study (baseline cohort). The demographic details and tumor characteristics of the patients are shown in Table .
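The 1:1 propensity matching described in the Methods was carried out in SPSS; purely for orientation, a greatly simplified Python equivalent (nearest-neighbour matching on a logistic-regression propensity score with a 0.05 caliper, hypothetical column names, and matching with replacement for brevity) might look like this.

```python
# Illustration only; not the study's SPSS procedure.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_1to1(df: pd.DataFrame, covariates, treatment_col="ptace", caliper=0.05):
    """Nearest-neighbour 1:1 match on the propensity score (with replacement)."""
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treatment_col])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])
    treated = df[df[treatment_col] == 1]
    control = df[df[treatment_col] == 0]
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    dist, idx = nn.kneighbors(treated[["pscore"]])
    keep = dist.ravel() <= caliper               # enforce the 0.05 match tolerance
    return treated[keep], control.iloc[idx.ravel()[keep]]

# Hypothetical usage:
# matched_t, matched_c = match_1to1(cohort, ["age", "afp", "tbs", "cirrhosis"], "ptace")
```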
In the overall population, the distribution of cirrhosis, portal hypertension, and viral markers was significantly different [Table ]. After 1:1 propensity matching, out of 154 patients, 77 each in upfront surgery (UPS) and post‐TACE (pTACE) groups were selected for analysis [Table ]. Among the 247 patients of the baseline cohort, 138 underwent UPS and 109 received pTACE [Figure ]. After 1:1 propensity matching, there were 77 patients in each group as shown in Figure . Of the 77 patients in the UPS group, 75 underwent successful curative resection and two were declared inoperable. Among the 77 patients in the pTACE group, the dropout rate was 35% (27/77), with multicentric disease ( n = 8 and 29.6%), being the most common reason and 48 patients ultimately underwent successful curative resection, since two patients were deemed inoperable due to the presence of bilobar disease on exploration. The median duration between the last TACE session and surgery was 74 days (range 14–244). The median number of TACE cycles given was 1 (range 1–4). Twenty‐two patients received more than one cycle of TACE. 3.1 Perioperative outcomes Surgical outcomes of UPS and pTACE groups are elaborated in the Supplementary file [Table 1 in Supporting Information ]. The complication rates in terms of posthepatectomy liver failure (PHLF), posthepatectomy bile leak (PHBL), and posthepatectomy hemorrhage (PHH) were not significantly different. 3.2 Overall survival (OS) In the baseline cohort of 247 patients, the median follow‐up was 38.43 months (0.46–144.24). The median OS of the UPS group was 40.4 months (95% CI, 29.57–51.24) as compared to 36.9 months (95% CI, 22.68–51.16) in the pTACE group ( p value = 0.448) on an intention‐to‐treat analysis [Figure ]. In the propensity matched population ( n = 154), the median follow‐up was 36.4 months (0.46–144.26). The median overall survival of the UPS group and the pTACE group were 30.06 months (95% CI, 13.526–46.597) and 39.26 months (95% CI, 16.74–61.78), respectively ( p value = 0.77). [Figure ]. In the same propensity matched population ( n = 154), on analysis of patients who underwent curative resection, the median overall survival were 30.68 months (95% CI, 14.5–46.8) in the UPS group versus 90.97 months in the pTACE group, respectively ( p value = 0.006). [Figure ]. Multivariate Cox regression analysis of factors affecting OS in the population who underwent successful curative resection, revealed cirrhosis ( p value = 0.005), lymphovascular invasion (LVI) ( p value = 0.035), and TACE ( p value = 0.007) as significant factors affecting OS [Table ]. 3.3 Disease‐free survival (DFS) In the baseline cohort of 247 patients, the median DFS of the UPS group was 18.26 months (95% CI, 8.52–28.00) as compared to 13.3 months (95% CI, 5.45–21.15) in the pTACE group ( p value = 0.663) on an intention‐to‐treat analysis. In the propensity matched population ( n = 154), the median DFS of the UPS group and the pTACE group was 13.56 months (95% CI, 7.77–19.36) and 13.76 months (95% CI, 5.38–22.15), respectively ( p value = 0.77). Analysis of patients who underwent curative resection showed a median DFS of 13.56 months (95% CI, 4.98–22.15) for the UPS group versus 44.02 months in the pTACE group, respectively ( p value = 0.013). [Figure ]. 
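The OS and DFS comparisons reported here were generated with Kaplan–Meier curves, log-rank tests, and Cox models in SPSS; an equivalent analysis could be sketched in Python with the lifelines package as shown below, where the column names are hypothetical.

```python
# Illustration only (the study used SPSS); 'os_months', 'death', 'group' are made-up columns.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("matched_cohort.csv")               # hypothetical file
ups, ptace = df[df.group == "UPS"], df[df.group == "pTACE"]

km = KaplanMeierFitter()
km.fit(ups.os_months, ups.death, label="UPS")
print(km.median_survival_time_)                      # median OS of the UPS group

result = logrank_test(ups.os_months, ptace.os_months,
                      event_observed_A=ups.death, event_observed_B=ptace.death)
print(result.p_value)                                # log-rank comparison of the two groups

cph = CoxPHFitter()
cph.fit(df[["os_months", "death", "cirrhosis", "lvi", "ptace_given"]],
        duration_col="os_months", event_col="death")
cph.print_summary()                                  # multivariate factors affecting OS
```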
Multivariate Cox regression analysis of factors affecting DFS in the population who underwent successful curative resection revealed TBS ( p value = 0.005), cirrhosis ( p value = 0.010), capsular invasion ( p value = 0.018), and TACE ( p value = 0.022) as significant factors affecting DFS [Table ]. There was no difference in the recurrence and death patterns among the groups [Supplementary file, Table 2 in Supporting Information ]. DISCUSSION Surgery (liver resection or transplantation) remains the best curative treatment option for HCC. , Even successful surgical resections are associated with high rates of intrahepatic recurrence, ranging from 50% to 75%. These intrahepatic recurrences can be early or late. Early recurrences are true recurrences due to intrahepatic metastases and correlate strongly with tumor characteristics.
In contrast, late recurrences tend to be multicentric in origin, which may be related to the condition of the remnant liver. Gao et al. have attributed early recurrences to either preexisting microscopic tumor foci or due to tumor dissemination during surgical manipulation. Transarterial treatment in the form of TACE has been hypothesized to reduce the early true recurrences due to intrahepatic metastases and prolong survival, whereas others have failed to demonstrate these outcomes. , , , , Direct infusion of a lipoidal agent and chemotherapy through the hepatic artery allows a high dose of chemotherapy to be delivered directly to the tumor. Transarterial therapies, such as TACE, have been used for unresectable/locally advanced HCC for tumor downsizing and rendering them amenable to surgical resection. Improved long‐term survival may be achieved in HCC patients who undergo surgical resection after downsizing. , , There is limited evidence of their utility as neoadjuvant treatment in resectable disease. , Preoperative TACE can detect micrometastases that are associated with large HCCs. It also enhances the ability to detect additional small nodules on a CT scan performed 2–3 weeks later, especially in the opposite lobe of cirrhotic livers. Kairobi et al. concluded that preoperative TACE did not reduce recurrences (local and distant) or improve survival in resectable HCCs. Zhou et al. conducted a randomized control trial comparing preoperative TACE versus upfront resection and concluded that preoperative TACE was not beneficial in improving survival (DFS and OS) in resectable HCCs. However, in the present study, majority of patients who received TACE were for downsizing [Figure ]. Patients who underwent successful curative resection, in the pTACE group, had improved survival that was statistically significant (90.97 vs. 30.68 months with p value = 0.006). A pathological complete response was observed in four patients and more than 50% necrosis was seen in 26 patients in post‐TACE resected specimens [Supplementary file, Table 1 in Supporting Information ]. This marked pathological response seen in 62.5% (30/48) of patients in the pTACE group has likely contributed to the improved survival in resected patients. Another key finding was the lower incidence of LVI in the pTACE group (22.9%) as compared to the UPS group (45.3%), which could be attributed to the effect of treatment. A similar finding was reported by Wang et al., where the microvascular invasion was lower in the TACE + liver resection group. Preoperative TACE induces massive necrosis that markedly reduces the amount of microvascular invasion in the tumor. , Increased incidence of microvascular invasion is often seen in large HCC and is a known poor prognostic factor. , One of the major concerns with preoperative TACE is the risk of progression and potential dropouts. Zhou et al. reported a dropout rate of 5% in their group because of liver decompensation or disease progression. They concluded that these patients had missed the chance of curative resection and cited it as a disadvantage of pTACE. However, it can be argued that patients who suffer liver decompensation post‐TACE have poor functional reserve and are unlikely to tolerate a major hepatectomy thus averting a futile surgery. The dropout rate in the group of patients who received pTACE was 35% in our study with common reasons being liver decompensation, multicentricity, and disease progression [Figure ]. 
Also, in the pTACE group, 30% had underlying cirrhosis and up to 10% had features of portal hypertension [Table ]. Therefore, TACE acted as a preoperative stress test for such patients and thereby helped in the patient selection. Patients who develop progressive disease with a liver‐directed therapy probably have a disease with an inherently aggressive biology and thus would be poor surgical candidates, thereby emphasizing the role of TACE in patient selection. Some studies have reported that pTACE can make surgery technically difficult because of intraoperative bleeding due to hepatic inflammation, diaphragmatic adhesions, and adhesions with surrounding structures such as the stomach. , In the present study, the median duration between TACE and surgery was 74 days (14–244). Nagasue et al. reported that the mean interval between TACE and surgery of 130 days resulted in similar complication rates as in patients who did not receive TACE. However, in the present study surgical outcomes in terms of PHLF, PHBL, PHH, and Clavien–Dindo scores were not different between the two groups. This study brings out a fallacy of the HKLC staging system. As per HKLC staging recommendations, all IIB‐stage patients should undergo surgical resection. However, it does not provide clarification on the resectability criteria, for example, large tumors with inadequate FLR and patients with comorbidities requiring optimization before surgery. In the present study, we have included these patients under the subcategory of borderline resectable diseases. These patients need downsizing procedures, such as TACE with or without PVE, to allow augmentation of FLR. The limitation of this study is its retrospective nature, which is associated with its inherent bias. A propensity matched intention‐to‐treat analysis was performed to reduce that bias. However, though propensity matching was used, unadjusted confounding may still exist as its retrospective data spread over a decade, wherein multiple factors might have influenced treatment decision‐making. CONCLUSION In intermediate‐stage hepatocellular carcinoma (Hong Kong Liver Cancer stage IIB), pTACE can be used to better select patients with borderline resectability. Survival was significantly improved in patients who received pTACE and were able to undergo surgical resection. Thus, it is important to subclassify the intermediate‐stage HCC who would benefit from pTACE and develop strategies to reduce the dropout rates. Kunal Nandy : Conceptualization; Data curation; Formal analysis; Writing ‐ original draft. Gurudutt P. Varty : Data curation; Formal analysis. Shraddha Patkar : Conceptualization; Writing ‐ original draft; Writing ‐ review and editing. Tanvi Shah : Data curation; Formal analysis. Kaival Gundavda : Data curation; Formal analysis; Writing ‐ review and editing. Kunal Gala : Writing ‐ review and editing. Nitin Shetty : Methodology; Writing ‐ review and editing. Suyash Kulkarni : Methodology; Writing ‐ review and editing. Mahesh Goel : Conceptualization; Writing ‐ review and editing. The authors declare that they have no relevant financial or nonfinancial interests to disclose. This is an observational study; hence, no ethical approval is required. Supporting Information 1
Tumor Area Positivity (TAP) score of programmed death-ligand 1 (PD-L1): a novel visual estimation method for combined tumor cell and immune cell scoring
666777f8-de3e-4135-871f-7cd09daaa6e1
10114344
Anatomy[mh]
The discovery of immune checkpoints has led to a paradigm shift toward immunotherapy treatment in cancer. One such checkpoint is the programmed cell death protein 1 (PD-1)/programmed death-ligand 1 (PD-L1) axis which is responsible for inhibiting an immune response of immune cells (IC) to foreign antigens . Tumor cells (TC) can also express PD-L1, leading to activation of the PD-1/PD-L1 pathway, which subsequently allows TC to evade the immune response and results in tumor growth . Increased PD-L1 expression in tissue from patients with cancer is positively correlated with clinical response to immunotherapy ; this highlights the need for scoring methods to accurately quantify PD-L1 protein expression. Optimal scoring methods should be accurate, precise, and help simplify workflow for practicing pathologists. Currently, United States Food and Drug Administration (FDA)-approved PD-L1 immunohistochemistry (IHC) assays/algorithms include scoring methods that consider TC positivity and/or IC positivity (Table ) . Combined Positive Score (CPS) is the only FDA-approved method that combines TC and IC; however, it is an approach based on cell counting which is time consuming and not intuitive to practicing pathologists. In this study, we introduce the Tumor Area Positivity (TAP) score, a simple, visual-based method for scoring TC and IC together which addresses the limitations of a cell-counting approach with comparable efficacy and reproducibility. Institutional review board approval was obtained by the Roche Tissue Diagnostics Clinical Operation Department. The two reader precision studies used commercial samples. For the samples used in the comparison study, which were collected as part of a BeiGene study, consent was obtained in compliance with requirements. Each pathologist received training on the TAP scoring algorithm: TAP (%) = (area covered by PD-L1 positive TC and tumor-associated IC / tumor area) × 100. Pathologists were then required to pass a series of tests before participation in the studies (see section). Samples from gastric adenocarcinoma, gastroesophageal junction (GEJ) adenocarcinoma and esophageal squamous cell carcinoma (ESCC) (including both resections and biopsies) were stained using the VENTANA PD-L1 (SP263) assay (Ventana Medical Systems, Inc., Tucson, AZ, USA). Between- and within-reader precision studies were performed for the TAP score among three internal (Roche Tissue Diagnostics) pathologists (internal study) and six pathologists from three external organizations (external study). After successful completion of the reader precision studies, TAP score was compared to CPS retrospectively for concordance and time efficacy. TAP scoring method description and approach Identification of tumor area To determine TAP score, a hematoxylin and eosin-stained slide is first examined to identify tumor area (area occupied by all viable TC and the tumor-associated stroma containing tumor-associated IC) (Fig. ). If tumor nests are separated by non-neoplastic tissue, they are included as part of the tumor area as long as the tumor nests are bordered on both sides of a 10x field; the intervening non-neoplastic tissue is also included in the tumor area (abbreviated as 10x field rule in the text below; Fig. ). Necrosis, crush, and cautery artifacts are excluded from tumor area. For gastric and GEJ adenocarcinoma, the following must be considered: Pools of mucin and glandular luminal spaces in the presence or absence of viable TC are included as part of the tumor area.
Tumor nests within the lymphovascular spaces are included in the tumor area. Tumor area determination in lymph nodes For lymph nodes with multiple nests of tumor metastasis, apply the 10x field rule. In lymph nodes with focal or discrete tumor metastases, tumor area includes tumor nests and the areas occupied by the IC immediately adjacent to the leading edge of the metastatic tumor nests. Determination of tumor-associated IC Tumor-associated IC are intra- and peri-tumoral, including those present within the tumor proper, between tumor nests, and within any tumor-associated reactive stroma. In lymph nodes with focal or discrete tumor metastases, only IC immediately adjacent to the leading edge of the metastatic tumor nest were defined as tumor-associated IC. Determination of TAP score The TAP score is determined on the IHC slide by visually aggregating/estimating the area covered by PD-L1 positive TC and tumor-associated IC relative to the total tumor area. Both circumferential and partial/lateral membrane staining of TC at any intensity is regarded as positive PD-L1 staining, while cytoplasmic staining of TC is disregarded; membranous, cytoplasmic, and punctate staining of tumor-associated IC at any intensity is regarded as PD-L1 positive staining (Fig. ). For gastric and GEJ adenocarcinoma, staining of IC in the germinal center of lymphoid aggregates are included in the TAP score if they are located within the tumor area. Intra-luminal macrophage staining is not included in the TAP score unless the macrophages completely fill the luminal space and are in direct contact with the TC. Staining of multi-nucleated giant cells, granulomas, and IC located within blood vessels and lymphatics are not included in the TAP score. Off-target staining (e.g., fibroblasts, endothelial cells, neuroendocrine cells, smooth muscle, and nerves) should not be confused for specific PD-L1 staining, and is not included in the TAP score. Pathologist training The training included review of an interpretation guide via Microsoft PowerPoint (Microsoft Corporation, Redmond, WA, USA) presentation, and review of a set of training glass slides using multi-headed microscopes in conjunction with the training pathologist. During the training session, PD-L1 biology, staining characteristics of TC and IC (Fig. ), and acceptability of system level controls were reviewed, among other topics. For gastric/GEJ adenocarcinoma, the test and training sets were designed to train the pathologists to accurately score PD-L1 expression status around the 5% cutoff (Fig. ). The tests included a self-study set of 10 cases with consensus scores, a mini-test of 10 cases, and a final test of 60 cases. To pass the final test, the trainee pathologist had to achieve 85% agreement with reference scores on either an initial or a repeat test. The training on ESCC scoring was conducted using different training and test sets. Internal reader precision study Three internal pathologists were trained and qualified for this study. 
This study evaluated: i) between-reader precision: across qualified readers individually evaluating the same set of randomized gastric or GEJ adenocarcinoma samples ( N = 100 with equal distribution of PD-L1 expression level for positive [ n = 50] and negative [ n = 50] samples, spanning the range of the TAP score); and ii) within-reader precision: within individual readers evaluating the same set of gastric or GEJ adenocarcinoma samples over two assessments, separated by a wash-out time period of at least 2 weeks, and re-randomized and blinded prior to the second read. Between- and within-reader precision were assessed by evaluating the concordance of PD-L1 expression level of samples among the three readers from their first round of reads and within individual readers from their first and second round of reads, respectively. In the between-reader precision analysis, there were three pair-wise comparisons for each sample (reader 1 vs. reader 2, reader 1 vs. reader 3, and reader 2 vs. reader 3). With N = 100 samples, there were a total of 300 pair-wise comparisons. In the within-reader precision analysis, with N = 100 samples, there were 100 comparisons between the two reading rounds for each reader. All samples were commercially obtained formalin-fixed paraffin-embedded specimens. A cutoff of 5%, using the TAP score, was used to determine if the PD-L1 expression in the sample was considered positive or negative. The sample set included 90% resection samples and 10% biopsy samples, 10% of which showed borderline range of PD-L1 expression. A sample was considered negative borderline if the TAP score was 2–4%, and positive borderline if TAP score was 5–9%. The average positive agreement (APA), average negative agreement (ANA), and overall percent agreement (OPA) between and within readers were then calculated, along with 95% confidence intervals (CIs). The acceptance criterion for between-reader precision was ≥85% ANA and APA. The acceptance criteria for within-reader precision were ≥ 90% OPA, and ≥ 85% ANA and APA. The assay was required to produce acceptable levels of non-specific staining on BenchMark ULTRA instruments (Ventana Medical Systems Inc.) in at least 90% of samples. External reader precision study Three external organizations participated in an inter-laboratory reproducibility study using a cutoff of 5% TAP. At each site, two trained and qualified pathologists were selected to score the slides originating from the same sets of blocks. Specifically, 28 commercially obtained gastric or GEJ adenocarcinoma formalin-fixed paraffin-embedded specimens spanning the range of the TAP score were used in the external study. There was an equal distribution of PD-L1 expression level for positive ( n = 14) and negative ( n = 14) samples using the TAP score at 5% cutoff. Ten percent biopsy samples and 10% borderline cases were included in the sample set. The 28 cases were stained on five non-consecutive days over a period of at least 20 days at three sites, generating a total of five sets of slides for evaluation by the two pathologists at each site. The APA, ANA, and OPA were calculated across the three sites. Comparison of TAP and CPS Gastric or GEJ adenocarcinoma and ESCC samples ( n = 52) from a BGB-A317 trial carried out by BeiGene (Beijing, China) were used to compare the TAP and CPS scoring algorithms for evaluation of PD-L1 expression in a retrospective manner. Of the 52 samples, n = 10 were resection samples and n = 42 were biopsies. 
All samples were stained with the VENTANA PD-L1 (SP263) assay. The samples were distributed among eight internal pathologists and were scored using both methods. All eight pathologists were trained and qualified to evaluate PD-L1 expression using both the TAP and CPS scoring algorithms. The concordance of the TAP score at a 1% and 5% cutoff was assessed against a CPS score of 1 (equivalent to 1%), the FDA-approved cutoff for gastric or GEJ adenocarcinoma. The time spent on scoring for each method was also assessed.
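Numerically, the two scores being compared reduce to simple ratios; the toy sketch below, with made-up counts and areas rather than study data, shows how a single sample would be classified under each rule.

```python
# Toy illustration with invented numbers; TAP is an area estimate, CPS a cell-count ratio.
def tap_positive(pct_area_pos_tc_ic: float, cutoff: float = 5.0) -> bool:
    """TAP: % of the tumor area covered by PD-L1 positive TC and tumor-associated IC."""
    return pct_area_pos_tc_ic >= cutoff

def cps(n_pos_tc: int, n_pos_ic: int, n_viable_tc: int) -> float:
    """CPS: PD-L1 positive cells (TC + mononuclear IC) per 100 viable TC, capped at 100."""
    return min(100.0, 100.0 * (n_pos_tc + n_pos_ic) / n_viable_tc)

sample = {"tap_pct": 6.0, "pos_tc": 40, "pos_ic": 25, "viable_tc": 900}
print(tap_positive(sample["tap_pct"]))                                    # True: TAP-positive at 5%
print(cps(sample["pos_tc"], sample["pos_ic"], sample["viable_tc"]) >= 1)  # True: CPS >= 1
```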
Gastric or GEJ adenocarcinoma and ESCC samples ( n = 52) from a BGB-A317 trial carried out by BeiGene (Beijing, China) were used to compare the TAP and CPS scoring algorithms for evaluation of PD-L1 expression in a retrospective manner. Of the 52 samples, n = 10 were resection samples and n = 42 were biopsies. All samples were stained with the VENTANA PD-L1 (SP263) assay. The samples were distributed among eight internal pathologists and were scored using both methods. All eight pathologists were trained and qualified to evaluate PD-L1 expression using both the TAP and CPS scoring algorithms. The concordance of the TAP score at a 1% and 5% cutoff was assessed against a CPS score of 1 (equivalent to 1%), the FDA-approved cutoff for gastric or GEJ adenocarcinoma. The time spent on scoring for each method was also assessed.
Internal reader precision study As shown in Table , for between-reader analyses (including borderline cases), the pre-defined acceptance criteria were met for APA (296/298 [99.3%]; 95% CI, 98.0–100.0), ANA (300/302 [99.3%]; 95% CI, 98.0–100.0), and OPA (298/300 [99.3%]; 95% CI, 98.0–100.0). For within-reader analyses (including borderline cases), the pre-defined acceptance criteria were met for APA (296/299 [99.0%]; 95% CI, 98.0–100.0), ANA (298/301 [99.0%]; 95% CI, 98.0–100.0), and OPA (297/300 [99.0%]; 95% CI, 98.0–100.0). The background acceptability rate (600/600 [100.0%]; 95% CI, 99.4–100.0) also met the pre-defined acceptance criteria.
External reader precision study Table shows that site A achieved the lowest agreement rates for APA (88/109 [80.7%], 95% CI, 63.6–93.5), ANA (144/165 [87.3%], 95% CI, 78.0–95.7), and OPA (116/137 [84.7%], 95% CI, 73.2–94.9), while sites B and C produced identical results for APA (140/140 [100.0%], 95% CI, 97.3–100.0), ANA (140/140 [100.0%], 95% CI, 97.3–100.0), and OPA (140/140 [100.0%], 95% CI, 97.3–100.0). Overall, high agreement levels were demonstrated across the three sites (APA, 368/389 [94.6%], 95% CI, 90.8–98.0; ANA, 424/445 [95.3%], 95% CI, 91.5–98.5; OPA, 396/417 [95.0%], 95% CI, 91.2–98.3).
Correlation of TAP and CPS The percentage agreement between TAP (1% cutoff) vs CPS (cutoff of 1) was 39/39 samples (100%; 95% CI, 91.0–100.0) for positive percent agreement (PPA), 11/13 samples (84.6%; 95% CI, 57.8–95.7) for negative percent agreement (NPA), and 50/52 samples (96.2%; 95% CI, 87.0–98.9) for OPA (Table ). For TAP (5% cutoff) vs CPS (cutoff of 1), the percentage agreement was 35/39 samples (89.7%; 95% CI, 76.4–95.9) for PPA, 13/13 samples (100%; 95% CI, 77.2–100.0) for NPA, and 48/52 samples (92.3%; 95% CI, 81.8–97.0) for OPA (Table ). The average time spent on scoring was 5 min for the TAP score and 30 min for the CPS scoring algorithm.
Understanding of immune checkpoint inhibitors has revolutionized the treatment options for cancer patients. Thus far, PD-L1 has been the focus of that recent paradigm shift. However, different scoring systems were introduced in rapid succession, which may have burdened practicing pathologists who had to consistently play catch-up. This study aimed to provide a simple, visual-based estimate scoring method which combines TC and IC to identify the intended patient population of interest. On-market FDA-approved PD-L1 scoring algorithms can be classified into TC- or IC-only score, TC and IC score in a sequential manner, or combined TC/IC score (Table ). In general, TC-only scoring methods have been favorably adopted by the pathology community , whereas IC scoring or sequential TC/IC scoring have been perceived as challenging. CPS is the only FDA-approved method that combines TC and IC. It is a cell counting-based approach where the number of PD-L1-stained cells (TC, lymphocytes, and macrophages) is divided by the total number of viable TC, multiplied by 100 . Cell counting can be time-consuming and is not in sync with pathology practice, which classically uses a Gestalt approach based on visual pattern recognition and estimation. Our study found that the average time spent on scoring was 5 min for the TAP score and 30 min for the CPS scoring algorithm, with one case of a large resection taking up to 1 h using CPS. Accordingly, pathologists must develop strategies to cope with CPS scoring during busy practice periods due to the time-consuming nature of the cell counting process. From communicating with practicing pathologists in the field, these strategies include piecemeal scoring approaches for large tumor resection specimens with heterogeneous staining pattern, eyeballing when applying 20x rules which provide estimated tumor cell numbers, and using a standard cellularity table for TC numbers. An added complexity of CPS scoring is assessment of the type of IC to be included in the count, which requires the pathologist to select only mononuclear IC . The TAP scoring method is inclusive of all types of IC; therefore, pathologists need not exhaust themselves under high magnification to confirm a cell type.
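The contrast between the counting-based CPS and the area-based TAP score can be made concrete with a short sketch. The CPS expression follows the definition quoted above (PD-L1 stained TC, lymphocytes, and macrophages divided by viable TC, multiplied by 100); the cap at 100 and all example inputs are assumptions for illustration only.

```python
# Illustrative comparison of the two scoring approaches discussed above.
# CPS follows the quoted definition (counted cells); TAP uses estimated areas.
# All inputs are hypothetical; the cap of 100 on CPS is an assumption here.

def cps(pos_tc: int, pos_lymphocytes: int, pos_macrophages: int,
        viable_tc: int, cap: float = 100.0) -> float:
    """Combined positive score: PD-L1 positive TC plus mononuclear IC counts,
    divided by the number of viable TC, multiplied by 100."""
    return min(100.0 * (pos_tc + pos_lymphocytes + pos_macrophages) / viable_tc, cap)

def tap_from_areas(pos_tc_area: float, pos_ic_area: float, tumor_area: float) -> float:
    """Area-based TAP estimate: PD-L1 positive TC plus IC area over total tumor area."""
    return 100.0 * (pos_tc_area + pos_ic_area) / tumor_area

# A hypothetical case evaluated with both algorithms at their respective cutoffs
sample_cps = cps(pos_tc=30, pos_lymphocytes=15, pos_macrophages=5, viable_tc=1000)  # 5.0
sample_tap = tap_from_areas(pos_tc_area=2.0, pos_ic_area=0.5, tumor_area=50.0)      # 5.0
print("CPS >= 1:", sample_cps >= 1, "| TAP >= 5%:", sample_tap >= 5)
```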
Increasingly, research has shown that granulocytes are part of the adaptive tumor immune response ; we have also observed weak to moderate PD-L1 expression in neutrophils around TC (Supplementary Fig. ). This evidence led to inclusion of granulocytes in development of the TAP method. To overly simplify, the TAP method is essentially “the percentage of relevant brown (positive cells) over blue (entire tumor areas on IHC slide)”. In this study, we compared the percentage agreement between TAP (1% and 5% cutoff) and CPS (cutoff of 1) in gastric/GEJ adenocarcinoma and ESCC samples using the VENTANA PD-L1 (SP263) assay, to investigate whether the two scoring methods were interchangeable, and if so, at what cutoff. The PPA, NPA, and OPA of the two comparisons were equal to or greater than 85%, with TAP score at 1% cutoff having better concordance with CPS 1 compared with TAP score at 5%. This suggests that the two algorithms, when used at different cutoffs, could potentially identify the same population of patients. In theory, samples in which the tumor stroma does not comprise large portions of tumor areas, such as mucosal biopsy specimens, have even greater potential for higher concordance of the two scoring methods (TAP and CPS). In fact, a study evaluated associations and potential correlations with clinical efficacy of the PD-L1 SP263 assay scored with the TAP algorithm (referred to as TIC [Tumor and Immune Cell]) at 5% cutoff and the PD-L1 22C3 assay scored with the CPS algorithm at 1% cutoff in gastroesophageal adenocarcinoma. Both the SP263 assay (TAP scoring) and 22C3 assay (CPS scoring) aided in the identification of patients with gastroesophageal adenocarcinoma likely to benefit from tislelizumab . A potential limitation of TAP scoring is in defining the tumor areas in situations where the specimens have complicated histology with various non-neoplastic cells present in between tumor cells. However, this becomes less problematic as a pathologist reviews more cases and gains more experience. The introduction of another PD-L1 scoring method (TAP) to an already confused market could be perceived as a limitation. However, as we have demonstrated, this method can help reduce confusion by providing a viable path for simplifying and standardizing pathology practice without compromising accuracy of patient selection. The data in this study show that the TAP scoring method is as effective as the CPS method in detecting patients with positive PD-L1 expression, but substantially less time-consuming. In addition to being highly reproducible among different pathologists, it can potentially standardize the existing scoring methods that evaluate both TC and IC. Additional file 1: Supplementary Fig. 1. Neutrophils with weak cytoplasmic staining.
A case report of gastric antral vascular ectasia treated by endoscopic band ligation combined with lauromacrogol injection
99ef3c93-55fa-4ef5-a95d-43a0abd8e804
11771729
Surgical Procedures, Operative[mh]
Gastric antral vascular ectasia (GAVE), a rare etiology of upper gastrointestinal bleeding, accounts for approximately 4% of non-variceal upper gastrointestinal hemorrhages. Endoscopic intervention is the first-line treatment option for GAVE, with argon plasma coagulation (APC) previously being the predominant modality. Recent studies have demonstrated the superiority of endoscopic band ligation (EBL) over APC for GAVE management. However, literature on combined endoscopic approaches for GAVE remains scarce. In this article, we describe a patient with GAVE-induced recurrent anemia who initially failed to achieve optimal results with EBL monotherapy. Subsequent treatment combining EBL with lauromacrogol injection led to a satisfactory outcome. The patient currently exhibits no signs of active gastrointestinal bleeding and maintains normal hemoglobin levels, and a follow-up esophagogastroduodenoscopy (EGD) reveals a substantial reduction in the antral lesions.
2.1. Chief complaints A 74-year-old female patient presented with chest tightness and dyspnea persisting for 1 year.
2.2. History of present illness The patient had experienced chest tightness and dyspnea for 1 year without specific treatment. She reported no abdominal pain, hematemesis, melena, or hematochezia.
2.3. History of past illness She had a history of well-controlled hypertension and hyperlipidemia. She denied any hepatic diseases or autoimmune disorders.
2.4. Personal and family history The patient never smoked or drank. There was no family history of malignant tumors.
2.5. Physical examination Physical examination showed an anemic appearance and sinus tachycardia with normal blood pressure. Abdominal palpation detected no masses, and there was no palpable enlargement of the liver or spleen. Rectal examination did not reveal blood in the stools. Body mass index was 30.4 kg/m².
2.6. Laboratory examinations Laboratory data showed microcytic hypochromic anemia, indicated by a hemoglobin level of 5.7 g/dL (reference range 11.5–15.5 g/dL). Following a blood transfusion, her hemoglobin increased to 8.2 g/dL. Iron studies demonstrated a ferritin level of 7.54 ng/mL (reference range 13–318 ng/mL), serum iron of 1.9 μmol/L (reference range 7.8–32.2 μmol/L), and a transferrin saturation of 1.7% (reference range 20%–55%). The reticulocyte count was elevated at 2.5% (reference range 0.5%–1.5%). Bone marrow examination disclosed no aberrant cell populations, consistent with the diagnosis of microcytic hypochromic anemia. Further laboratory evaluations showed an antinuclear antibody titer of 1:1000 and an elevated gastrin level of 769 ng/L (reference range 13–115 ng/L). The patient had multiple positive fecal occult blood tests (FOBT). Other laboratory test results, including alanine aminotransferase, aspartate aminotransferase, platelet counts, and coagulation function, fell within the normal range.
2.7. Imaging examinations Abdominal computed tomography showed a fatty liver. Cardiac ultrasound and chest computed tomography were normal.
2.8. Further diagnostic work-up During a subsequent EGD accompanied by histological biopsy, radial erythema was observed in the gastric antrum, with some areas showing nodular elevations (Fig. A). Hematoxylin and eosin staining revealed dilated capillaries (Fig. B), whereas immunohistochemical analysis indicated positive staining for CD31 (Fig. C) and CD34 (Fig. D).
Colonoscopy examination did not show any obvious lesions.
2.9. Final diagnosis The endoscopic observations and histological evidence led to a diagnosis of GAVE.
2.10. Treatment Considering the poor response to prior pharmacotherapy, we selected endoscopic intervention for this patient. Given the patient's high expectations for treatment and the preference to reduce the risk of recurrence, we opted for EBL based on recent research findings. We utilized a multi-band ligator equipped with 10 bands to treat the radial erythematous stripes emanating from the distal gastric antrum, covering the majority of the lesion (Fig. ). The procedure was uneventful with no intraoperative or postoperative complications, and the patient did not complain of any significant discomfort. Discharge occurred 48 hours post-procedure, with ongoing administration of omeprazole and rebamipide. Hematology assessment at 6 weeks after discharge revealed an increase in hemoglobin level to 11.3 g/dL, but the FOBT remained positive. Subsequent endoscopy indicated a reduction in lesions within the gastric antrum compared with the initial presentation 6 weeks earlier. However, persistent mucosal erythema and nodularity suggested a suboptimal therapeutic response. Consequently, we combined EBL with lauromacrogol injection as a novel treatment strategy for this patient. During the operation, we reapplied EBL to the areas with radial stripes and administered lauromacrogol injections to the more severe parts of the lesions (Fig. ). This second intervention mirrored the first in its lack of complications and the absence of discomfort for the patient, who was again discharged 48 hours post-procedure.
2.11. Outcome and follow-up The patient currently has no symptoms of chest tightness or dyspnea. At the 6-week follow-up, the patient exhibited a negative FOBT and normalization of hemoglobin level to 12.9 g/dL, and endoscopic images demonstrated near-complete resolution of the vascular ectasias (Fig. ).
GAVE, a rare disease characterized by dilated blood vessels in the antrum radiating to the pylorus, was first described by Rider et al in 1953. Advancements in endoscopic techniques led to further characterization of GAVE in 1984, when Jabbari et al named it "watermelon stomach" because of its characteristic pattern resembling the stripes of a watermelon. The pathophysiology of GAVE remains elusive; it has been thought to be associated with chronic liver disease and with autoimmune and connective tissue diseases. The present patient exhibited an elevated antinuclear antibody titer, yet showed no evidence of a diagnosable autoimmune or connective tissue disease.
Recent studies increasingly suggest a correlation between GAVE and metabolic syndrome, including obesity, hyperlipidemia, hypertension, diabetes, and nonalcoholic fatty liver disease. The patient's obesity, hypertension, and hyperlipidemia lend further support to this association. Additionally, the patient presented with hypergastrinemia, but the role of gastrin in the etiology of GAVE remains controversial. GAVE can manifest in multiple patterns based on endoscopic findings, the 3 common subtypes being "watermelon stomach," "honeycomb stomach," and the more recently described nodular GAVE; the present case can be classified as "watermelon stomach." Notably, approximately 40% of GAVE cases are endoscopically misdiagnosed, often confused with erythema, polyps, gastritis, and ulcers. Among the 3 GAVE categories, nodular GAVE is most susceptible to misclassification. Due to its endoscopic resemblance to hyperplastic polyps and the challenges in histological differentiation, nodular GAVE frequently receives incorrect diagnoses. While histopathological analysis can assist in accurate diagnosis, it is compromised by a high false-negative rate due to inadequate sample sizes and thus should not serve as the sole diagnostic criterion. The important value of endoscopic evaluation should not be overlooked. A year prior, the patient had exhibited characteristic endoscopic signs of GAVE (Fig. ), yet the condition was misidentified as chronic erosive gastritis. Despite treatment with medication, the patient experienced persistent anemia and positive FOBT over the subsequent year. Therefore, enhancing the ability to diagnose GAVE and selecting an efficacious therapeutic strategy are essential. The EGDs conducted 7 years ago (Fig. A) and 6 years ago (Fig. B) revealed no classic endoscopic indications of GAVE; only mild erythema was observed in the gastric antrum. This suggests that the patient's gastric antral lesions evolved over time, aligning with the concept that GAVE is an acquired disease. Regrettably, no histological biopsy was performed at that time. At present, no pharmacological agent has been conclusively demonstrated to be effective for treating GAVE. Endoscopic intervention is still the cornerstone of management. The decision to treat and the choice of therapeutic approach primarily hinge on the preferences of the endoscopist. Previous endoscopic treatment modalities, including heater probe, bipolar electrocoagulation, Nd-YAG laser, and cryotherapy, have been largely replaced by APC due to lower success rates and issues of availability. Nevertheless, large variability in endoscopic success rates and high recurrence rates limit the effectiveness of APC in clinical practice. EBL has recently emerged as an alternative, increasingly favored for its safety and efficacy. Studies have shown that patients treated with EBL experience a more significant endoscopic response than those treated with APC, benefiting from reduced posttreatment transfusion requirements and fewer follow-up interventions. Based on the histological characteristics of GAVE, which include dilated and tortuous submucosal veins accompanied by extensive mucosal lesions, it can be inferred that targeting deeper layers may yield improved outcomes. APC is a superficial technique that acts mostly on the mucosal layer, whereas EBL reaches deeper, acting on both the mucosal and submucosal layers.
Neil et al suggest that EBL may be preferentially selected for more severe cases of GAVE, possibly because EBL affects deeper layers of the gastric wall, leading to thrombosis and ischemia of the mucosa and submucosa and thus providing a more comprehensive hemostatic effect. EBL is a well-established technique in clinical practice, with its safety and feasibility widely confirmed across various conditions, including esophageal varices, Dieulafoy lesions, angiodysplasia, blue rubber bleb nevus syndrome, and hemorrhoids. In the present case, EBL was utilized for GAVE during the first operation, employing a multi-band ligator equipped with 10 bands. A follow-up EGD 6 weeks post-procedure showed that the lesion reduction was not particularly noticeable. Although there was an increase in the patient's hemoglobin level, the FOBT remained positive. Therefore, we sought a more effective treatment modality. Combined endoscopic therapy has demonstrated advantages in managing esophageal varices, but reports on GAVE treatment are scarce. To date, only Chen et al have reported that APC in conjunction with sclerotherapy can lower the recurrence rate, while EBL plus sclerotherapy injection for treating GAVE has not yet been reported. After obtaining informed consent, we performed EBL for most of the gastric antral lesions during the second operation, while lauromacrogol injections were administered to the relatively more severe lesions. Routine blood re-examination at 6 weeks after the procedure showed that hemoglobin had risen to a normal level, the FOBT was negative, and a follow-up EGD indicated considerable lesion shrinkage, denoting a satisfactory short-term outcome. However, the long-term efficacy still warrants further observation. The patient has been advised to undergo subsequent EGD, routine blood analysis, and FOBT after 6 months and 1 year to evaluate the long-term outcome. GAVE is a rare clinical disease that can cause severe upper gastrointestinal tract bleeding. Nonetheless, a consensus on the most effective treatment approach has yet to be established. The combination of EBL with lauromacrogol injection has shown a satisfactory short-term outcome, providing a new option for the endoscopic management of GAVE. However, its long-term efficacy still requires further observation. We sincerely thank the patient for her cooperation in information acquisition, treatment, and follow-up. Conceptualization: Qi Lin. Investigation: Yukai Chen. Methodology: Keke Sun. Project administration: Qi Lin. Writing – original draft: Linbo Chen. Writing – review & editing: Pingping Hu.
A Review of Herbal Medicine-Based Phytochemical of
b50d6032-024e-4c0c-9221-3122c5250241
9554952
Pharmacology[mh]
Data from Globocan showed that the new cases of breast cancer in 2020 were 11.7% and it became the highest incidence rate of cancer for both sexes in all ages. In addition, the International Agency for Research on Cancer (IARC) reported 2.1 million new cases of breast cancer in 2018. Breast cancer is also the top cause of cancer death in women worldwide, with 627,000 fatalities reported in 2018. Breast tumor subtyping is traditionally done using immunohistochemistry (IHC) markers such as “estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2).” These hormone and growth receptors, which are known to stimulate cell growth and survival signaling, are well-established therapeutic targets for breast cancer treatment and have been the focus of pharmacological research. Weinberg et al described six cancer hallmarks in 2000: maintaining proliferative signals, avoiding growth suppressors, resisting cell death, enabling replicative immortality, initiating angiogenesis, and activating invasion and metastasis. A wide range of intracellular chemicals have been discovered as causing cancer cells to proliferate uncontrollably. In malignant cells, for example, “cyclin-dependent kinase (CDK)” overexpression and tumor suppressor protein “(p53), BRCA1 and BRCA2, CDK inhibitors, p21, p27, and p57” downregulation have been discovered. , Protein control of pro-apoptotic “Bcl-2 family members, initiator caspase (e.g., caspase 8/9), effector caspase (e.g., caspase 3), and apoptosis” as a barrier to cancer formation. Several important receptors and signaling pathways have emerged as key players in the development and advancement of breast cancer. “The epidermal growth factor receptor (EGFR), HER2, and Vascular Endothelial Growth factor (VEGF)” are the most prevalent growth factor receptors that are overexpressed in breast cancer cells. These receptors may be activated by the Janus kinases, signal transducer and activator of transcription proteins “(JAK/STAT), phosphoinositide 3-kinase (PI3K), protein kinase B (Akt), mammalian target of rapamycin (mTOR), and mitogen-activated protein kinases (MAPK)” pathways. Furthermore tumor cells have been found to exhibit altered expression of many pro-inflammatory transcription factors, including “nuclear factor kpB (NFkpB), activating protein-1 (AP-1), and hypoxia-inducible factor 1 (HIF-1)”. Chronic inflammation is thought to play a role in both the start and development of cancer. As a result, addressing the major aberrant proteins and pathways is a promising strategy to breast cancer treatment. Garcinia is a Clusiaceae family genus with approximately 450 species found in tropical Asia, South Africa, and America, as well as Madagascar, New Guinea, and Polynesia. For example the fruits of Garcinia have been used in traditional medicine for a variety of purposes, including antifever infusions in Thai folk medicine, wound healing and treatment of peptic ulcers in Brazilian folk medicine, earache in Thai medicine, and ailments such as heat strokes, infections, and edema in the Ayurvedic system of medicine. “Polyisoprenylated benzophenones, polyphenols, bioflavonoids, xanthones, lactones, and triterpenes” are among the physiologically active metabolites found in the fruits, stem barks, seeds, leaves, and roots of numerous Garcinia species. As a result, Garcinia species have been shown to be abundant in compounds that have medicinal properties. 
Free-radical scavenging, antiulcer effects, cytotoxicity, nitric oxide synthase inhibition, cancer chemoprevention, induction of apoptosis, anti-HIV, and trypanocidal properties have been associated to these substances. This review aimed to analyze the potential of Garcinia phytochemicals as molecular therapy of breast cancer. This research is important to provide information concerning phytochemicals of Garcinia as an alternative treatment for patients with breast cancer. The articles were selected on the basis of inclusion studies published in the PubMed database; articles in English, available in full text and abstract form, consist of the keywords “Garcinia” and “breast cancer.” The sorting processes can be seen in . Globocan showed that new cases of breast cancer in 2020 were 2.261.419 (11.7%) and mortality was 684.996 deaths (6.9%), thus, breast cancer has become the highest incidence rate of cancer. Breast tumor subtypes are traditionally classified based on hormonal and growth factor response. In this context, the most clinically important receptors are “ER, PR, and HER2.” Cancer cells deregulate these hormonal and growth signals, allowing them to continue proliferative signaling in a variety of ways. They enhance cell surface receptor expression and accumulate activating mutations, resulting in cell surface receptor or downstream signaling pathway activation that is constant. Some Garcinia metabolites, such as “Garcinol, α-mangostin, Cambogin, and Gambogic acid” (GA) have exhibited anticancer action in vitro and in vivo, causing apoptosis and cellular cycle arrest, suppression of angiogenesis, and gene expression regulation in carcinogenic cells. The general pathways of phytochemicals of Garcinia mechanism in cancer targeted therapy can be seen in . α-Mangostin Garcinia mangostana extracts were found to contain the anticancer phytochemical α-Mangostin. Kurose et al discovered that α-mangostin induced mitochondrial apoptosis. Increased caspase-3, caspase-8, and caspase-9 activity, as well as increased cytochrome c protein release concentration, support this. The expression of CDK - interacting protein 1 (p21cip1) was upregulated, and Checkpoint Kinase 2 (CHEK2) was tended to increase, resulting in a decrease in CDKs and cyclins, as well as G1-phase arrest and inhibition of cell proliferation, followed by decreases in proliferating cell nuclear antigen (PCNA). When compared to proform-HER2 expression, Kritsanawong et al discovered that -Mangostin can reduce Phospho-HER2 (p-HER2) at Tyr1221/1222. This results in a decrease in Nuclear Factor NF-Bp 65, c-Rel, and c-Myc expression while increasing IB Kinase Complex Alpha (IKK) expression. However, activation of p38 and c-Jun N-Terminal Kinase 1/2 (JNK1/2) resulted in the expression of C/EBP Homologous Protein and c-Jun. Shibata et al revealed that α-mangostatin promotes mitochondrial apoptosis, G1-phase arrest, and S-phase suppression during the cell cycle. Akt phosphorylation was induced by α-mangostin treatment both in vitro and in vivo, demonstrating that α-mangostin significantly reduces the levels of phospho-Akt-threonine 308 (Thr308); α-Mangostin significantly increased caspase-3, caspase-8, and caspase-9 activity; cytochrome c protein levels in cytosolic fractions were significantly higher in cells treated with α-angostin; and caspase-8-Bid cleavage triggered the mitochondrial pathway. 
α-Mangostin also activated caspases-8, -9, and -7, elevated Bax, p53, and cytosolic cytochrome c protein levels, and stimulated Poly (ADP-Ribose) Polymerase (PARP) cleavage while lowering Bid and Bcl-2 protein expression, according to Won et al. Furthermore, apoptosis-inducing factor (AIF) was transferred from the mitochondria to the cytosol and promoted apoptosis in E2-stimulated cells as well as in non-stimulated cells, lowering the expression of ERα and pS2, an estrogen-responsive gene. According to Doi et al, panaxanthone isolated from G. mangostana dramatically boosted caspase-3, caspase-9, and caspase-8 activities, triggered G1-phase arrest, and lowered the number of cells in both the S- and G2/M-phases. α-Mangostin reduced 12-O-tetradecanoylphorbol-13-acetate (TPA)-induced MMP-2 and MMP-9 production, as well as cell invasion and migration, according to Lee et al.
Cambogin Cambogin compounds can be found in the branches of Garcinia esculenta . Shen et al reported that cambogin treatment activates the NOX enzyme by enhancing the interaction between p22phox and NADPH Oxidase 1 (NOX1). The resulting increase in intracellular and mitochondrial levels of superoxide (O2-) and hydrogen peroxide (H2O2) led to the dissociation of Thioredoxin 1 (Trx1), activation of the Apoptosis Signal-Regulating Kinase 1 (ASK1) pathway, and the induction of mitochondrial network abnormalities. According to Shen et al, activation of ASK-1, SAPK/Erk Kinase (SEK1/MKK4), MKK7, and Jun Amino-Terminal Kinases/Stress-Activated Protein Kinase (JNK/SAPK) is necessary for cambogin-induced Reactive Oxygen Species (ROS). Cambogin stimulated the caspase-independent mitochondrial apoptotic pathway, as evidenced by an increase in the ratio of B-Cell Lymphoma Protein 2, Associated X (Bax/Bcl-2), and nuclear translocation of AIF. JNK/SAPK or p38 MAPK activation phosphorylated Activating Transcription Factor 2 (ATF-2) and increased histone H3K9 trimethylation in the activator protein 1 (AP-1) binding region of the Bcl-2 gene promoter.
Gambogic Acid The compound GA is abundant in Garcinia hanburyi . According to Wang et al, TNF-Related Apoptosis-Inducing Ligand (TRAIL) and GA increased apoptosis in TRAIL-resistant cells and play a critical role in inducing apoptosis and reducing levels of anti-apoptotic Bcl-2 protein, boosting the interplay of extrinsic and intrinsic apoptosis signaling. GA induces apoptosis, according to Zhou et al, who observed changes in the expression levels of apoptosis-regulating proteins such as cleaved caspase-3, caspase-8, and caspase-9, as well as Bax, while Bcl-2 was decreased and Fas and Fas ligand (FasL) were increased. GA depolymerized microtubules and increased JNK1 and p38 phosphorylation, causing G2/M cell-cycle arrest and apoptosis, according to Chen et al. According to Wang et al, GA also increases the expression of the apoptosis-related proteins FasL, caspase-3, caspase-8, caspase-9, and Bax while suppressing the anti-apoptotic protein Bcl-2. GA also caused PARP cleavage, caspase-3, caspase-8, and caspase-9 activation, and an increase in the Bax/Bcl-2 ratio, according to Li et al. Furthermore, GA caused apoptosis via the buildup of ROS and the mitochondrial apoptotic pathway, as shown by AIF translocation and cytochrome c (Cyt c) release from mitochondria. By decreasing Akt/mTOR signaling, GA also reduced cell survival.
GA also prevented tumor invasion and metastasis by reducing MMP-2 and MMP-9 activity, according to Qi et al. , Garcinol Garcinol compound can be found in Garcinia morella and Garcinia indica . Choudhury et al reported that Garcinol inhibited the complex polysaccharides like Lipopolysaccharide (LPS) induced increase in cytokine secretion such as Tumor Necrosis Factor Alpha (TNF-α), Interleukin 1 Beta (IL- 1β) by macrophages as inflammatory agent. Ahmad et al showed that garcinol causes Mesenchymal-Epithelial Transition (MET) in aggressive breast cancer cells through apoptosis mediated by downregulation of the NF-kB signaling pathway. This is in line with the mesenchymal markers vimentin, ZEB1, and ZEB being downregulated and the epithelial marker E-cadherin being increased, as well as the miRNAs, miR-200, and let-7 families being implicated in the maintenance and control of EMT to MET. The results also show that garcinol has an effect on the Wnt signaling pathway, causing β-catenin to translocate to the nucleus. There is crosstalk between the NF-kB and Wnt signaling pathways when the phosphorylated form of β-catenin increases. GSK-3, the phosphorylation factor for β-catenin, was discovered to be increased, causing β-catenin nuclear translocation to be inhibited and, as a result, Wnt signaling pathways to be inhibited. Garcinol increased Taxol-induced antimitotic activity, reduced caspase-3/iPLA2-stimulated cell repopulation and prevented NF-kB/Twist1-derived pro-inflammatory signaling and pro-metastatic properties, according to Tu et al. Chen et al discovered that Garcinol-induced 9-nAChR downregulation may have a direct impact on cyclin D3 gene expression transcriptional regulation. Garcinol inhibits the progression of the cell cycle in human breast cancer cells through regulating the cyclin D3 gene. According to Ahmad et al, garcinol suppressed IL-6-induced STAT-3 phosphorylation as well as the synthesis of urokinase-type plasminogen activator (uPA), VEGF, and matrix metalloproteinase-9 (MMP-9) activator, reducing cell invasion and aggressiveness. Garcinol inhibited IL-6-induced STAT-3 phosphorylation and production of urokinase-type plasminogen activator (uPA), VEGF, and matrix metalloproteinase-9 (MMP-9) activator, which reduced cell invasion and aggressiveness, according to Ahmad et al. Induction of caspase-mediated apoptosis (Caspase 3, Caspase 9) was also added by Ahmad et al, as evidenced by PARP cleavage. Apoptosis is induced by inactivation of NF-kB signaling and downregulation of its target genes. According to Ye et al, garcinol suppressed 17-Estradiol (E2), which elevated ac-H3, ac-H4, and NF-κB/ac-p65 levels. Nuclear translocation of NF-κB/p65, as well as cyclin D1, Bcl-2, and Bcl-xl mRNA and protein expression levels, were decreased in E2-treated cells. In the NF-κB pathway, reduced ac-p65 protein expression is hypothesized to be connected to downregulation of cyclin D1, Bcl-2, and Bcl-xl expression. Griffipavixanthone (GPX) Griffipavixanthone (GPX) is found in Garcinia oblongifolia . According to Ma et al, GPX cleaves caspase-8/9, and PARP GPX increased the mRNA level of the p53 gene and its target genes, and changed Bax expression while Bcl-2 decreased in mitochondria by releasing cytochrome c. 40 Friedolanostane Triterpenoid Garcinia celebica fruits contain the triterpenoid compound friedolanostane . Subarnas et al discovered that a compound inhibited the oncogenic protein Akt, resulting in an increase in PARP. Hexane The fruits of Garcinia quaesita consist of Hexane . 
Pathiranage et al reported that the compound increased the activity of caspase 3/7, increased Bax, and decreased Baculoviral Inhibitor of Apoptosis Repeat Containing 5 (BIRC-5).
Neobractatin (NBT) Neobractatin (NBT) was found in Garcinia bracteata extract , which inhibited metastasis by decreasing the expressions of pAKT, the EMT markers vimentin and cofilin, and Matrix Metalloproteinase 2 (MMP2).
7-Epiclusianone (7-Epi) The extract of Garcinia gardneriana fruits contains 7-Epiclusianone (7-Epi), which enhances the BAX/BCL-2 ratio when cells accumulate in the G0/G1 phase . 7-Epi reduced the expression of CDK Inhibitor 1A (CDKN1A (p21)) and cyclin E in both cell lines, while decreasing the expression of cyclin D1 and p-ERK in the MCF-7 cell line.
S1 (the Regioisomeric Mixture of Xanthochymol and Guttiferone E) and S2 (the Regioisomeric Mixture of Isoxanthochymol and Cycloxanthochymol) These compounds are found in Garcinia xanthochymus . According to Xu et al, S1 and S2 reduced the phosphorylation of STAT3's upstream kinases, Janus Kinase 2 (JAK2) and Src, as well as the expression of various STAT3-regulated genes, including anti-apoptotic (Bcl-XL, Mcl-1, and survivin), proliferative (cyclin D1), and angiogenic (VEGF) genes.
The result of this review showed several pieces of research that reported the use of Garcinia phytochemicals as molecular therapy for breast cancer, as seen in .
On the basis of this review, it can be concluded that Garcinia phytochemical compounds show potential as molecular therapies for breast cancer and have low toxicity to normal cells. This finding offers an alternative, minimally invasive therapeutic option for patients with breast cancer, since chemotherapy agents have many adverse side effects on healthy cells. The results confirm that α-mangostin, Cambogin, GA, Garcinol, Griffipavixanthone, Friedolanostane triterpenoid, Hexane, Neobractatin, 7-Epiclusianone, xanthochymol - guttiferone E, and isoxanthochymol - cycloxanthochymol have anticancer properties, including induction of apoptosis and inhibition of proliferation and metastasis.
Effects of previous arthroscopic knee surgery on the outcomes of primary total knee arthroplasty: a systematic review and PRISMA-compliant meta-analysis
17914b9e-e14f-480d-926e-81fb37da92ad
11871776
Surgical Procedures, Operative[mh]
Knee osteoarthritis (OA) is a prevalent and disabling condition affecting a substantial portion of the aging population, with projections indicating an increase in its prevalence aligned with global aging trends . Initially, the condition is managed through conservative interventions, such as physical therapy, pharmacologic treatments, and lifestyle modifications . However, as OA progresses, conservative measures are often rendered ineffective, necessitating surgical intervention for many patients . Total knee arthroplasty (TKA) is the gold standard for managing end-stage knee OA, providing significant pain relief and functional restoration with favourable long-term outcomes . The influence of prior arthroscopic knee surgery on TKA outcomes remains a subject of scientific debate. Knee arthroscopy (KA), commonly performed to address meniscal tears, loose bodies, or other intra-articular issues, is hypothesised to impact the outcomes of subsequent TKA due to its potential effects on knee joint structures . Although numerous studies, including several meta-analyses, have explored this relationship, a clear consensus on whether prior KA adversely impacts TKA outcomes has not been reached. Several studies have indicated that patients with a history of KA have higher rates of postoperative complications, including infections, stiffness, and revision procedures, alongside inferior functional recovery . The proposed mechanisms underlying these findings include intra-articular adhesions, altered joint biomechanics, or iatrogenic cartilage and bone damage resulting from arthroscopic procedures . In contrast, other studies have reported no significant associations between prior KA and TKA outcomes, suggesting that previous arthroscopic intervention does not influence postoperative recovery or implant longevity . The authors of these studies have hypothesised that the minimally invasive nature of KA likely minimises disruption to bone structures essential for TKA, resulting in comparable postoperative outcomes to those seen in patients without prior KA . Of note, several studies have suggested that the interval between KA and TKA could play a moderating role in TKA outcomes, with shorter intervals potentially linked to less favourable results, as incomplete recovery from the prior procedure may influence subsequent surgical outcomes . This lack of consensus highlights the critical need for further investigation into the relationship between prior KA and TKA outcomes. Based on existing hypotheses, it was anticipated that patients with a history of KA would exhibit distinct postoperative outcomes relative to those without. Specifically, it was hypothesised that prior KA would be associated with an increased risk of postoperative complications, reduced functional outcomes, and a higher revision rate following TKA. The present systematic review and meta-analysis was conducted to deliver a comprehensive and updated synthesis of the available literature in order to gain evidence-based insights for clinical decision-making in the management of patients with advanced knee OA. A systematic review of the scientific literature was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist . This review was registered in PROSPERO under the registration number CRD42024562998. The PICO (Population, Intervention, Comparison, Outcome) strategy was employed to formulate a precise search approach. 
The target population comprised patients diagnosed with advanced knee OA, with older age groups being predominant as knee OA is most common in these age groups. The intervention under investigation was prior arthroscopic knee surgery performed before TKA. The comparison group included patients who underwent TKA without any history of prior arthroscopic surgery. The primary outcomes included postoperative complications, functional recovery, pain relief, joint stability, and the rate of revision surgery. The keywords used to conduct the search are presented in Table . The computerized search strategy utilized several databases, including PubMed, Embase, Cochrane Library, Wanfang Database, and China National Knowledge Infrastructure (CNKI). The most recent search was conducted on October 20, 2024. Tailored modification of the search strategy was required for each database, and no language restrictions were applied. Additionally, the reference lists of the studies identified in the database searches, as well as those of pertinent reviews, were manually examined to identify further studies for potential inclusion. Details of the search strategies and results can be found in Supplementary Table . Inclusion and exclusion criteria The inclusion criteria included: randomized controlled trials and retrospective studies of the effects of KA on the prognosis of TKA from both domestic and international sources; the study comprised knee OA patients undergoing TKA surgery; the study comprised a KA group (those with a history of KA prior to TKA) and a non-KA group (those without a history of KA before TKA); the outcome measures included the TKA revision rate, reoperation rate, stiffness rate, prosthetic joint infection (PJI) rate, venous thromboembolism (VTE) incidence, postoperative range of motion (ROM), and Knee Society Score (KSS), among others. The exclusion criteria included: patients with a history of open knee surgery or fractures; studies that did not evaluate postoperative indicators or did not compare the two groups; incomplete data or literature for which the full text was unavailable; reviews, editorials, letters, conference abstracts, or case reports. Data extraction Two researchers conducted independent screening of the identified studies and extracted the relevant data from the included studies, resolving any disagreements through discussion or by consulting a third party. The screening process strictly adhered to the above criteria. Priority was given to recent or high-impact factor publications in cases of duplicate authorship or research centre publications. The collected data encompassed the study details, patient characteristics, interval between previous arthroscopy and joint arthroplasty, follow-up time, effect size, and adjustment variables. If there were any uncertainties in the data, the authors were contacted for clarification. Quality evaluation Quality assessment of the cohort studies was conducted using the Newcastle-Ottawa Scale (NOS) , which assigns scores ranging from 0 (highest bias risk) to 9 (lowest bias risk). Disagreements were resolved through consensus. The risk of bias was then categorized as high (0–3), medium (4–6), or low (7–9) . Data synthesis and statistical analysis The mean difference and standard deviation were used in the assessment of continuous functional outcomes. For skewed data, the median and interquartile range were employed, according to Wan et al.‘s method . 
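Wan et al.'s method referenced above converts a reported median and interquartile range into an approximate mean and standard deviation so that skewed summaries can be pooled alongside normally reported ones. The sketch below uses the commonly cited large-sample simplifications (mean ≈ (q1 + median + q3)/3; SD ≈ (q3 − q1)/1.35); the full method additionally adjusts the IQR divisor for sample size, so this is an approximation, and the example values are hypothetical.

```python
# Approximate mean/SD from a reported median and IQR so the study can enter the pool.
# Large-sample simplification of Wan et al. (2014); the full method replaces the
# constant 1.35 with a sample-size-dependent divisor. Example values are hypothetical.

def mean_sd_from_median_iqr(q1, median, q3):
    mean = (q1 + median + q3) / 3.0
    sd = (q3 - q1) / 1.35
    return mean, sd

# e.g., a study reporting postoperative ROM as median 110 degrees (IQR 100-120)
print(mean_sd_from_median_iqr(100, 110, 120))  # (110.0, ~14.8)
```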
Results for continuous outcomes are reported as weighted mean differences (WMDs) or standardized mean differences (SMDs) with 95% confidence intervals (CIs). For binary outcomes, the relative risk (RR) was extracted or calculated. Heterogeneity among effect sizes was evaluated using chi-square tests. A fixed effects model was used when homogeneity was observed ( p > 0.1 and I 2 < 50%), while a random effects model was employed when significant heterogeneity was present ( p < 0.1 and I 2 ≥ 50%). Subgroup analyses were conducted in cases of substantial heterogeneity. Sensitivity analysis was performed to assess the stability of the results, and publication bias was evaluated using Egger’s test when data were available from more than five studies. The statistical analyses were conducted using Stata 12.0, with a significance level of p < 0.05.
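To make the model-selection rule above concrete, the following Python sketch pools hypothetical study-level log relative risks by inverse-variance weighting, computes Cochran's Q and I 2 , and switches to a DerSimonian-Laird random-effects model when heterogeneity is substantial. It illustrates the logic only; it is not the authors' Stata code, and for brevity it bases the decision on I 2 alone rather than on both the heterogeneity p-value and I 2 as described above.

```python
import numpy as np

def pool_log_rr(log_rr, se):
    """Pool log relative risks; choose fixed vs random effects from I^2.

    Illustrative sketch with hypothetical inputs; the review itself used
    Stata 12.0 and also considered the heterogeneity p-value.
    """
    log_rr, se = np.asarray(log_rr, float), np.asarray(se, float)
    w = 1.0 / se**2                                    # inverse-variance (fixed-effect) weights
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed) ** 2)              # Cochran's Q
    df = len(log_rr) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    if i2 < 50.0:                                      # homogeneous: fixed-effects pooling
        est, var = fixed, 1.0 / np.sum(w)
    else:                                              # heterogeneous: DerSimonian-Laird random effects
        tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
        w_star = 1.0 / (se**2 + tau2)
        est, var = np.sum(w_star * log_rr) / np.sum(w_star), 1.0 / np.sum(w_star)
    rr = np.exp(est)
    ci = np.exp(est + np.array([-1.96, 1.96]) * np.sqrt(var))
    return rr, ci, i2

# Hypothetical log-RRs and standard errors from three studies
print(pool_log_rr([0.25, 0.31, 0.18], [0.10, 0.15, 0.12]))
```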
Selection of studies A comprehensive search was performed across the PubMed, Embase, Cochrane Library, Wanfang Data, and CNKI databases. The search identified a total of 1770 records. After removing 243 duplicate records, 1527 unique articles remained for initial screening. The titles and abstracts of these articles were reviewed, resulting in the exclusion of 1501 articles that did not meet the inclusion criteria, primarily due to irrelevance to the study’s scope. This preliminary screening yielded 26 articles deemed potentially eligible for full-text review. Of these, two articles were excluded due to unavailability of the full text. The remaining 24 articles were thoroughly evaluated against the detailed eligibility criteria. During this review, 13 articles were excluded for the following reasons: three were review articles, two were conference abstracts, one lacked the specific data required for analysis, three did not investigate outcomes related to TKA, two did not focus on arthroscopic surgery (e.g., in Watters’s study , it was unclear whether open surgery or arthroscopic surgery was used for anterior cruciate ligament repair), and two contained duplicate or insufficient data . Ultimately, 11 studies met the inclusion criteria and were included in the quantitative synthesis for meta-analysis. The entire selection process, including each stage of screening and exclusion, is illustrated in Fig. , according to the PRISMA guidelines. This systematic approach ensured the inclusion of relevant studies to enhance the validity and reliability of the meta-analysis findings. Study and patient characteristics The 11 included studies , detailed in Table , were published between 2009 and 2024, and predominantly originated from the United States ( n = 6), with contributions from China, Brazil, England, and France. Ten studies were retrospective cohort analyses, utilizing registry or institutional databases, while one was a prospective study. Together, the studies comprised 194,367 patients undergoing primary TKA, with 13,086 (6.7%) having prior KA. The patients ranged in age from 18 to 95 years, with mean ages between 56 and 72 years. Overall, over 55% of patients were female, reaching up to 91.7% in some control groups . The body mass index (BMI) values averaged between 27 and 33 kg/m², indicating that most patients were overweight or obese, which is common among knee OA cases. The interval between KA and subsequent TKA varied widely, from less than three months to four years, as reported in eight studies. The follow-up period post-TKA ranged from 90 days to 8 years, allowing for the assessment of both immediate and long-term outcomes. The postoperative complications that were evaluated included infection, stiffness, VTE, and the need for manipulation under anaesthesia (MUA). The findings were mixed: studies such as those by Piedade et al. and Gu et al. reported increased postoperative complications and reduced TKA survival with prior arthroscopy, while studies such as those by Issa et al. and Xu et al. found no significant negative impacts. The methodological quality of the included studies was assessed using the NOS and is presented in Table .
The scores on this scale ranged from 7 to 9, indicating an overall low risk of bias. However, the area with the greatest risk of bias was the comparability between groups, as most studies did not adequately adjust for important confounders that could have influenced the results (see Supplementary Table ). Meta-analysis Postoperative functional improvement Four studies compared preoperative and postoperative ROM, while six others evaluated functional improvement scores. The meta-analysis showed no significant difference in ROM between patients with and without prior arthroscopy (mean difference − 0.61, 95% CI -3.48 to 2.26; I 2 = 62.4%; random effects model; 4 studies; n = 362 in the arthroscopic group, n = 1542 in the control group; Supplementary Fig. A). Sensitivity analysis upheld these findings (Supplementary Fig. A) and there was no evidence of publication bias (Table ). Functional improvement, measured mainly through the KSS, was marginally lower in the arthroscopic group, albeit not significantly (SMD − 0.075, 95% CI -0.186 to -0.037; p = 0.081; I 2 = 46.7%; random effects model; 7 studies; n = 1067 in the arthroscopic group, n = 4067 in the control group; Supplementary Fig. B). Subgroup analysis based on the interval between arthroscopy and TKA indicated poorer outcomes for intervals less than one year, though this was based on a single study. The stability of the results was confirmed through sensitivity analysis (Supplementary Fig. B), and no publication bias was detected (Table ). Postoperative complications Joint stiffness The analysis included six articles and seven data sets, with a total of 6,699 cases in the experimental group and 173,748 cases in the control group. A meta-analysis of the data revealed significant interstudy heterogeneity ( p = 0.001; I 2 = 73.1%), and a random effects model was employed. The results revealed no statistically significant difference between the two groups (RR 1.354, 95% CI 0.881 to 0.081; p = 0.167), indicating that knee arthroscopy did not increase the risk of stiffness after subsequent TKA. Subgroup analysis also indicated that the incidence of postoperative stiffness was not increased when TKA was performed either within one year of arthroscopy or more than one year afterwards (Supplementary Fig. C). Sensitivity analysis confirmed the stability of these results (Supplementary Fig. C), and Egger’s test indicated no publication bias (Table ). Periprosthetic fractures A total of four studies were included in this analysis, with 3,664 patients in the experimental group and 136,085 patients in the control group. The meta-analysis revealed significant heterogeneity among the four studies ( p < 0.001; I 2 = 81.2%), leading to the use of a random effects model. The analysis showed no significant difference between the two groups (RR 0.86, 95%CI 0.13 to 5.54; p = 0.876). These findings suggest that knee arthroscopy did not increase the risk of periprosthetic fracture after TKA. Subgroup analysis supported this finding, indicating that TKA performed either within one year of arthroscopy or more than one year afterwards did not increase the incidence of postoperative fracture around the prosthesis (Supplementary Fig. D). Sensitivity analysis further confirmed the stability of these results (Fig. D). VTE Two articles were included in this analysis , with a total of three groups of data. Given that the heterogeneity between the study results was not large ( p = 0.662; I 2 = 0.00%), a fixed effects model was used for the meta-analysis.
The results showed that the incidence of VTE between the two groups was not significantly different (RR 1.06, 95% CI 0.83 to 1.35; p = 0.662). Likewise, the subgroup analysis showed that TKA performed either within one year of arthroscopy or more than one year afterwards did not increase the incidence of postoperative VTE (Supplementary Fig. E). Sensitivity analysis confirmed that this finding was relatively stable, verifying the reliability of the results (Fig. E). Aseptic loosening Eight studies , encompassing eight sets of data, were incorporated into the meta-analysis of aseptic loosening; there were 9,433 cases in the experimental group and 143,420 cases in the control group. The meta-analysis of the included data revealed considerable heterogeneity among the studies ( p < 0.001; I 2 = 75.9%); thus, a random effects model was employed. The comparison between the two groups yielded no statistically significant difference (RR 1.542, 95% CI 0.876 to 2.716; p = 0.134), suggesting that arthroscopic surgery of the knee did not increase the risk of aseptic loosening following TKA (Supplementary Fig. F). Sensitivity analysis indicated that the results were relatively stable (Supplementary Fig. F), and Egger’s test suggested the absence of publication bias (Table ). PJI A total of eight articles were included in this analysis, comprising 10 groups of data. The experimental group consisted of 12,377 cases, while the control group consisted of 178,523 cases. The meta-analysis revealed a lack of heterogeneity ( p = 0.622; I 2 = 0.0%), and a fixed effects model was used. The difference between the two groups was statistically significant (RR 1.317, 95%CI 1.165 to 1.488; p < 0.01). The results indicated that KA increased the risk of artificial joint infection after TKA (Fig. ). Subgroup analysis indicated that TKA performed within one year of arthroscopy in particular was associated with an increased risk of PJI (RR 1.314, 95% CI 1.156 to 1.493; p < 0.01). Sensitivity analysis confirmed the stability of the results (Supplementary Fig. G), and an Egger’s test suggested no publication bias (Table ). MUA A total of three articles were included in this analysis, with 9,066 cases in the experimental group and 141,370 cases in the control group. The results of the meta-analysis showed heterogeneity among the studies ( p < 0.001; I 2 = 89.8%), and a random effects model was used. A statistically significant difference was found between the two groups (RR 1.761, 95%CI 1.140 to 2.719; p = 0.011). The results suggested that the need for manipulation under anaesthesia after TKA was increased in the experimental group (Fig. ). Sensitivity analysis showed that this result was not fully stable: when Sax 2022 was excluded, the combined result was no longer statistically significant (RR 15.514, 95% CI 0.075 to 3196.22) (Supplementary Fig. H). Meta-analysis of the revision rate A total of eight studies were included in this analysis, with a total of nine sets of data. The follow-up period ranged from 2 to 8.7 ± 2.5 years. The results revealed heterogeneity between the studies ( p = 0.166; I 2 = 30.40%), and a fixed effects model was employed. The analysis revealed a statistically significant difference between the two groups (RR 1.423, 95%CI 1.280 to 1.583; p < 0.01). Specifically, KA was associated with an increased revision rate after TKA. Subgroup analyses indicated that knee arthroplasty performed within one year after arthroscopy, in particular, was associated with a higher rate of revision after surgery (Fig. ).
Sensitivity analysis showed that the combined effect size changed significantly when Sax 2022 was removed, but the results remained statistically significant (RR 1.623, 95%CI 1.351 to 1.950; p < 0.001) (Supplementary Fig. I). This suggests that these meta-analysis results are relatively robust and not unduly driven by any single study. Additionally, Egger’s test suggested no publication bias (Table ).
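The Egger's test referred to in the methods and throughout these results can be sketched as follows: the standardized effect (effect divided by its standard error) is regressed on precision (one divided by the standard error), and an intercept that deviates from zero suggests funnel-plot asymmetry. The Python example below uses hypothetical inputs and is not the Stata routine the authors ran.

```python
import numpy as np
import statsmodels.api as sm

def egger_test(effects, se):
    """Egger's regression test for small-study effects (funnel-plot asymmetry).

    Hypothetical inputs only; the review performed this test in Stata.
    """
    effects, se = np.asarray(effects, float), np.asarray(se, float)
    y = effects / se                       # standardized effect sizes
    x = sm.add_constant(1.0 / se)          # column of ones (intercept) plus precision
    fit = sm.OLS(y, x).fit()
    return fit.params[0], fit.pvalues[0]   # intercept and its p-value

# Hypothetical log-RRs and standard errors from six studies
intercept, p_value = egger_test([0.30, 0.22, 0.35, 0.15, 0.28, 0.40],
                                [0.08, 0.12, 0.10, 0.20, 0.09, 0.25])
print(f"Egger intercept = {intercept:.3f}, p = {p_value:.3f}")
```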
This systematic review and meta-analysis revealed the impact of prior KA on the outcomes of TKA. Although functional outcomes, as measured by the KSS and ROM, were comparable between patients with and without a history of arthroscopy, an increased risk of postoperative infection and a higher need for revision surgery were observed in the arthroscopy group. Several meta-analyses have previously examined the influence of prior KA on subsequent TKA outcomes . A recent study by Liu et al. found that performing arthroscopy before TKA significantly increased the risk of postoperative revision, reoperation, infection, and aseptic loosening. However, it is worth noting that the analysis mistakenly included a duplicate study , which could affect the accuracy and credibility of the results. Furthermore, a large-scale study was recently published , which may change the results of the aforementioned meta-analysis. The current study addresses these limitations and provides a more comprehensive and up-to-date analysis of the literature. The current meta-analysis revealed comparable postoperative ROM and functional improvement between patients with and without prior arthroscopy before TKA. The 95% CIs for these outcomes provide insight into both the statistical significance and clinical relevance of the findings. For these primary functional outcomes, the CIs were narrow, centred near zero, and overlapped the null effect (i.e., a between-group difference of zero). This not only indicates statistical non-significance but also a lack of clinically meaningful difference . It has been suggested that arthroscopic surgery may impact future TKA outcomes, with the potential mechanisms including intraarticular scarring, adhesions, cartilage damage, ligament rupture, and changes in the patellar trajectory . However, the current study did not find any clinical impact on functional performance, possibly because arthroscopy is mainly used to address localized knee lesions such as meniscal tears and loose bodies, whereas TKA is indicated for conditions such as end-stage OA. The findings of the current study suggest that, except for a marginally increased risk of joint infection, severe complications post-TKA are similar between those who have undergone prior arthroscopy and those who have not, consistent with the findings of Issa et al. and Viste et al. .
No significant difference in postoperative complications was noted between those undergoing TKA within one year of arthroscopy and those who underwent it more than one year later, suggesting minimal impact of preoperative arthroscopy on post-TKA complications such as stiffness, fractures, and VTE. However, the need for postoperative manipulation under anaesthesia (MUA) was almost double in the prior-KA group, likely due to intra-articular adhesions from arthroscopy. Nonetheless, no significant disadvantage in postoperative motion scores was observed, potentially due to limitations in the sample size and the absence of clinically significant differences in activity levels between the groups. The current findings revealed a higher TKA revision rate in patients who had previously undergone arthroscopy. This may be attributed to arthroscopy-induced cartilage and bone damage, which can affect implant stability. The osmotic pressure of arthroscopic perfusate affects the metabolism of the knee tissues, disrupting cartilage structures and exposing subchondral bone. This, in turn, stimulates autoimmune responses, leading to osteolysis . It is important to note that the average follow-up period of the included studies ranged from 90 days to 8 years, and since TKA revisions tend to peak at 8–10 years post-surgery , the actual revision risk in the arthroscopy group could be higher. TKA revisions are riskier, more traumatic, and more costly than primary surgeries . Therefore, reducing the revision rate after TKA is crucial for optimizing the quality of joint arthroplasty. This finding has important implications for determining the necessity of arthroscopic surgery prior to TKA. Infection after joint arthroplasty is a significant complication. This meta-analysis revealed that patients who had previously undergone arthroscopy had a 32% higher risk of infection after TKA. This increased risk may be attributed to the destruction of the joint barrier and the formation of hematoma caused by arthroscopy . Additionally, the presence of scar tissue following arthroscopy can potentially weaken local immunity and elevate the risk of infection during subsequent prosthesis placement . However, it is important to interpret this relatively small increase in infection risk cautiously. The baseline incidence of peri-implant infection after primary TKA is low, at approximately 1–2% . Therefore, a 32% relative risk increase translates to a minimal absolute risk difference of only approximately 0.3–0.6%. Recent work by Werner et al. has highlighted the temporal criticality of the adverse effects of arthroscopy on subsequent joint replacement. Their large study, based on national databases, showed that the risk of complications increased only when TKA was performed within six months of an arthroscopy. However, TKA within one year after arthroscopy is currently used as a surrogate marker of departmental compliance with guidelines for the management of degenerative knees . According to Johanson’s research, the goal of a one-year conversion rate of less than 10% has been set as the benchmark for optimal care . In the current study, subgroup analysis showed that the revision rates of patients who received TKA within one year of arthroscopy and those who received TKA more than one year later were both significantly higher than those of TKA patients who had not undergone arthroscopy.
However, the postoperative joint infection rate was significantly higher in patients who underwent TKA within one year of arthroscopy as compared to patients who did not undergo arthroscopy. Therefore, we recommend caution when performing TKA within one year of arthroscopy, and surgeons should inform patients of the possibility of an increased risk of postoperative complications. Further studies with longer follow-ups are required to verify these findings and explore the optimal time interval between arthroscopy and TKA. The meta-analysis conducted in this study has several notable strengths. Firstly, a systematic synthesis of evidence from over 100,000 knee arthroplasty surgeries was performed. This provided enhanced precision of estimates regarding the relationship between prior arthroscopy and outcomes. This approach also allowed for the examination of potential complications that are often underreported in individual cohorts. Additionally, the study employed detailed subgroup and sensitivity analyses to investigate the impact of the interval length between arthroscopy and TKA on the results, thereby enhancing the robustness of the findings. Furthermore, the assessment of publication bias using Egger’s test indicated no significant bias. Finally, this study incorporated several recent relevant studies. Despite its strengths, this study has several limitations that should be noted. First, all included studies were retrospective, which may introduce selection bias, confounding variables, and group differences. Most studies did not adequately control for factors such as age, obesity, and systemic diseases. To address this, original data were collected for this study, where possible, and adjusted effect sizes were used in the meta-analysis. Second, while no significant differences were found in KSS and ROM, variability in follow-up times, patient demographics, and scoring methods may limit the generalizability of the current findings. Additionally, due to insufficient data, the effects of different arthroscopic surgeries (e.g., meniscectomy, debridement, microfracture) on TKA outcomes could not be distinguished. Heterogeneity in outcomes such as postoperative stiffness and periprosthetic fractures may relate to differences in surgical techniques, comorbidities, and implant types. Moreover, despite the independent literature search and data extraction conducted by both authors, potential discrepancies in interpretation could impact the data synthesis. While consensus resolution may mitigate this effect, the current study did not employ a formal assessment method to evaluate inter-rater agreement, which is a limitation of the methodology. Lastly, short follow-up periods and limited assessment of long-term complications reduce the robustness of the current conclusions. Future prospective studies are needed to provide more reliable evidence. This study investigated the impact of prior KA on the outcomes of subsequent TKA. The findings indicate that although functional outcomes are similar between these two groups, those with prior arthroscopy have higher risks of deep infection and revision. Below is the link to the electronic supplementary material. Supplementary Material 1 Supplementary Material 2 Supplementary Material 3 Supplementary Material 4 Supplementary Material 5
Prokaryotic communities profiling of Indonesian hot springs using long-read Oxford Nanopore sequencing
9b92dc75-8890-41af-be34-d072196c6ef4
11446079
Microbiology[mh]
Indonesia lies at the intersection of the Ring of Fire and the Alpide belt, which fuels volcanic activity and geothermal heat, resulting in its abundance of hot springs . Bali, an island in Indonesia, is well known for its unique and diverse flora, fauna, and hot springs. Hot springs in Indonesia are rich reservoirs of microbial life, and various researchers have investigated them for the discovery of novel thermophilic bacteria. However, data on the exploration of thermophilic bacteria from Indonesian hot springs using culturing and molecular techniques remain scarce , and metagenomic data aimed at identifying their microbial diversity are unavailable. Microbial profiling by 16 S rRNA amplicon sequencing and shotgun metagenomic sequencing provides a comprehensive picture of the hot spring microbial community and can lead to the discovery of many novel and rare species, their metabolites, and biocatalysts . Because thermophiles are poorly culturable, 16 S rRNA amplicon-based metagenomic analysis is the best way to determine the diversity of thermophilic bacteria living in hot springs , and it can guide the isolation of microbes with potential for the production of novel metabolites. Long-read 16 S rRNA gene amplicon sequencing using Oxford Nanopore Technologies (ONT) covers the full-length gene and therefore offers advantages over other NGS platforms . Long-read sequencing has transformed microbiome taxonomic classification and profiling, deepening our understanding of microbial life and its potential for groundbreaking discoveries . Thus, 16 S rRNA amplicon sequencing data provide a window into the unseen world of microbes and offer invaluable insights into the composition, distribution, and potential of microbial communities. The present study explored the microbial diversity associated with three hot springs located in Bali, Indonesia. Data obtained from the present study may act as a benchmark for researchers aiming to map the microbial diversity associated with hot springs, provide a comprehensive view of the microbial community, and inform the monitoring of hot springs’ health. Sample collection Sterile thermal bottles were used to collect water samples from three hot springs. Metadata such as water temperature, pH, color, and turbidity were recorded during sampling. Water samples were collected multiple times during the day in July and September 2023 and brought to the laboratory on the same day. The samples were then pooled, 100 mL of each pooled sample was filtered through a membrane filter, and the biomass retained on the filters was subjected to DNA isolation. Metagenomic DNA extraction The BioLit Genomic DNA Extraction Mini Kit (SRL, Mumbai, India) was used to isolate DNA from the water samples of the three hot springs. The quantity and quality of the DNA were determined using 0.8% agarose gel electrophoresis followed by a NanoDrop spectrophotometer and a Qubit fluorometer. After QC of the isolated DNA, 50 µl of each DNA sample was used for sequencing. 16 S amplicon sequencing The 16 S rRNA gene sequence libraries were created using the 16 S Rapid Amplicon Barcoding Kit (ONT, Oxford, UK) by following the manufacturer’s instructions. LongAmp ® Taq 2X master mix (New England Biolabs, Ipswich, USA) and the barcoded nanopore sequence primers 27 F 5′-AGA GTT TGA TCM TGG CTC AG-3′ and 1492R: 5′-GGT TAC CTT GTT ACG ACT T-3′ were used to amplify the full-length (1600 bp) 16 S rRNA gene.
Following the quantification of 16 S rRNA gene amplicons, equal amounts of amplicons per sample were pooled, and the library was processed according to the manufacturer’s instructions. After incubating the library with Library Loading Beads (ONT, Oxford, UK), the mixture was loaded into the GridION flow cell (version R.9.4, ONT, Oxford, UK). The GridION nanopore sequencer was used for 14 h of sequencing at PT. Genetika Science Indonesia ( https://ptgenetika.com ). Nanopore sequencing was operated by MinKNOW software version 23.04.5. Basecalling was performed using Guppy version 6.5.7 with a high-accuracy model . Data processing The output data (FASTQ files) comprised more than 93,000 amplicon sequences per sample, which were subjected to QC using NanoPlot 1.40.0. Quality filtering was done using NanoFilt 2.8.0 to obtain 0.34 GB of data per sample, with an average sequence length of 1600 bp in all three samples. The average sequence quality was 30 (Phred score). Filtered reads were classified using the Centrifuge classifier . The Bacteria and Archaea index was built using the NCBI 16 S RefSeq database . Data are publicly available at EMBL-EBI ENA under the study ID PRJEB70710 (Table ) . The project is ongoing, and no other data or analyses were published earlier.
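As a sketch of how the classifier output can be turned into a community profile, the Python snippet below tallies genus-level read counts from a tab-separated Centrifuge report. The file name is hypothetical, and the column names ('name', 'taxRank', 'numReads') are assumptions based on the typical Centrifuge report format, so they should be checked against the actual output before use.

```python
import csv
from collections import Counter

def genus_read_counts(report_path):
    """Tally reads per genus from a tab-separated Centrifuge report.

    Assumes columns named 'name', 'taxRank' and 'numReads'; verify these
    against your Centrifuge version before relying on the script.
    """
    counts = Counter()
    with open(report_path, newline="") as handle:
        for row in csv.DictReader(handle, delimiter="\t"):
            if row.get("taxRank") == "genus":
                counts[row["name"]] += int(float(row["numReads"]))
    return counts

counts = genus_read_counts("hotspring_sample1_report.tsv")  # hypothetical file name
total = sum(counts.values())
for genus, n in counts.most_common(10):
    print(f"{genus}\t{n}\t{100.0 * n / total:.2f}%")
```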
First, while 16 S rRNA sequencing successfully assigned taxonomy to the hot spring microbiome, it could not provide functional analysis of the microbes, and it does not detect fungi, viruses, or other non-bacterial/archaeal organisms in the sample. Second, our sampling strategy involved collecting water samples multiple times throughout a single day in July and September 2023, and the DNA was then isolated from pooled samples to capture a broader range of species. However, this approach only provides a snapshot and may not represent the seasonal variations within the hot springs’ microbial community. To achieve a more comprehensive understanding of microbial dynamics, it would be ideal to collect water samples from all hot springs throughout the year.
Genomic Characterization of
2e62fae8-ce79-4174-a13e-55f7c4995a43
11593849
Microbiology[mh]
Lactic acid bacteria (LAB) are a diverse group of bacteria producing lactic acid as the main end-product of carbohydrate fermentation and are ubiquitously distributed in nature . They include several genera, such as the emended genus Lactobacillus , Lactiplantibacillus , Lacticaseibacillus , Limosilactobacillus , Streptococcus , Pediococcus , Leuconostoc , and Weissella . They are being continuously researched due to probiotic attributes and potential health benefits. Probiotics are live microorganisms that, when administered in adequate amounts, confer a health benefit to the host . Probiotic bacteria have positive impacts on the immune system and on the composition and functioning of the gut microbiota. Moreover, the biosynthesis of vitamins has been assumed to be among the causal relationships of the health benefits of probiotics . LAB as probiotics are used to prevent and/or treat gastrointestinal disorders, and a wealth of evidence emerging from studies also indicates their anti-cancer activity . Lactiplantibacillus plantarum is a gram-positive, non-motile, non-spore-forming, microaerophilic, and mesophilic bacterium that belongs to the group LAB . It is one of the most adaptable LAB species, as evidenced by its ability to inhabit a wide range of niches such as the gastrointestinal, vaginal and urogenital tracts, meat, fish, fermented vegetables, wine, and dairy products . It is widely used in industrial fermentation and as probiotics since it is “Generally Recognized as Safe” (GRAS) and has a Qualified Presumption of Safety (QPS) status . Throughout the last century, documented health-promoting and functional properties of L. plantarum strains have generated attention for their applications . Beneficial properties attributed to L. plantarum are diverse, varying from its use in the fermentation of dairy products such as cheese, kefir, sauerkraut, fermented meat products, fermented vegetables, and beverages to its cholesterol-lowering activity, enhancement of the intestinal barrier, immunomodulation, and prevention of bacterial and viral infections . Its antibacterial properties are also interesting for food safety, as in the biopreservation technology . The European Food Safety Authority (EFSA) requires unequivocal taxonomic identification at the strain level in whole genome sequence (WGS) analysis of microorganisms intentionally used in the food chain . The WGS analysis also provides a better understanding of the relation between strains’ genotypic and phenotypic profiles and, thus, is required to better understand strain features . Although LAB are GRAS, there are rare cases of the emergence of some infections and antibiotic resistance . For this reason, data obtained from WGS analysis are required for the unequivocal taxonomic identification of the strains. Moreover, the analysis can provide valuable information regarding the potential functional traits, virulence factors, resistance to antimicrobials, and the production of toxic metabolites . In this study, we aimed to taxonomically identify and explore the genome of the three strains (54B, 54C, and 55A) isolated in our previous study using in silico WGS analysis as per the EFSA recommendations . 2.1. Bacterial Strains, Growth Conditions, and Genomic DNA Extraction Three isolates ( L. plantarum 54B, 54C, 55A) originally isolated from Ethiopian traditional cottage cheese were selected based mainly on their performance in antimicrobial and cell culture assays. 
The source, Ethiopian traditional cottage cheese, was produced by heating a spontaneously fermented (18–24 h) and defatted cow milk at the household level, and the strains were characterized in our previous study . Strains were revived from −80 °C glycerol stocks on de Man, Rogosa, and Sharpe (MRS) (a selective medium used to enrich LAB ) plates and incubated for 48 h at 37 °C. Single colonies were cultivated in MRS broth for 24 h at 37 °C. Total DNA content was extracted using a modified protocol based on Alimolaei and Golchin . Briefly, 1.5 mL of overnight culture was transferred twice to two sterile Eppendorf tubes, and 1.5 µL of ampicillin (100 mg/mL) was added and incubated at 37 °C for 1 h. The culture was then spun down at 12,000× g for 3 min to remove the supernatant, and the pellet was washed 3× with 1 mL of NaCl-EDTA. The pellets present in both Eppendorf tubes were pooled into one Eppendorf tube. The cell pellets were resuspended in 100 µL of NaCl-EDTA and 100 µL of lysozyme (10 mg/mL), and 1 µL RNase (20 mg/mL) was added to the tube and incubated at 37 °C with periodic shaking for 1 h. Following this, 229 µL of NaCl-EDTA, 50 µL of 10% SDS, and 20 µL of Proteinase K were added, vortexed, and incubated at 55 °C for 1 h. Then, 200 µL of cold protein precipitation solution was added and vortexed at maximum speed for 20 s. The mix was then centrifuged at 12,000× g , at 4 °C, for 3 min after being placed on ice for 5 min. The supernatant was transferred to a clean 1.5 mL tube, centrifuged again (12,000× g , 4 °C, 3 min), and the supernatant was transferred to a clean 1.5 mL tube. The DNA was precipitated with 600 µL ice-cold isopropanol and centrifuged at 12,000× g , at 4 °C, for 3 min to discard the supernatant. The pellet was then washed with 600 µL fresh 70% ethanol, the supernatant was discarded, and the tube was left to air-dry. Finally, the pellet was dissolved in 50 µL H 2 O, incubated at 55 °C for 5 min, and stored at −20 °C. DNA samples in the range of 25–50 ng/µL (measured with Qubit), with a minimum volume of 20 µL, were sent for WGS. 2.2. Genome Sequencing, Assembly and Annotation High molecular weight genomic DNA of the isolates was then further processed for sequencing using Nextera library prep and MiSeq sequencing (Illumina) at the lab of Medical Microbiology, University of Antwerp. After sequencing, the raw reads were demultiplexed, and the barcode sequences were removed. The raw reads were also subjected to adapter cutting and quality filtering using the Fastq Utilities Service (with Trim, Paired Filter, FastQC, and Align pipeline options) of the Bacterial and Viral Bioinformatics Resource Center (bv-brc.org) using the default parameters. The reads were de novo assembled into contigs using SPAdes (3.12.0) with default parameters in the Shovill (1.0.0) pipeline. Quality and completeness were assessed using CheckM (completeness > 94% required) . Annotation was performed with Prokka (1.12) . 2.3. Bioinformatic Tools for Comparative Genomics Studies Comparative genomic analysis was performed using bv-brc.org . The bv-brc Taxonomic Classification service (accessed on 8 September 2024) was used to identify species of the strains using WGS reads following the pipeline established by Lu et al. that utilizes the Kraken 2 taxonomic classification system . The pipeline uses exact-match database queries of k-mers.
Sequences are classified by querying the database for each k-mer in a sequence and then using the resulting set of lowest common ancestor (LCA) taxa to determine an appropriate label for the sequence. A comparative phylogenetic tree was constructed in bv-brc using the “Bacterial Genome Tree” tool (accessed on 24 August 2024), which generates a phylogenetic tree using the codon tree method for the three novel L. plantarum strains (54B, 54C, and 55A) together with 23 previously published L. plantarum isolates from a broad spectrum of niches. The Codon Tree pipeline generates bacterial phylogenetic trees by using the amino acid and nucleotide sequences from a defined number of the bv-brc global Protein Families (PGFams), which are picked randomly to build an alignment and then generate a tree based on the differences within those selected sequences. The support values in the system are generated using 100 rounds of the “Rapid” bootstrapping option of RAxML (8.2.11). The ‘Comparative Systems Service’ of the bv-brc (accessed on 1 May 2024) was also utilized to perform pathway analyses, to compare protein families among the genomes included in the analysis, and to mine the presence or absence of the ‘probiotic marker genes’ in the genomes studied. The Pathway Comparison Tool is based on the Rapid Annotation using Subsystem Technology tool kit (RASTtk) annotations. It allows the identification of the presence or absence of metabolic pathways based on taxonomy, pathway ID, EC number, pathway name, and/or specific annotation type. The Average Nucleotide Identity (ANI) was calculated using FastANI v1.33. The ‘Variation Analysis’ service of the bv-brc was used to measure single nucleotide polymorphism (SNP) variation between the isolates L. plantarum 54B and 54C. 2.4. Prediction of Putative Biosynthetic Gene Clusters of Bioactive Compounds To predict genes coding for different types of biosynthetic pathways involved in the production of secondary metabolites (SMs), antiSMASH 7.0 (Antibiotics and Secondary Metabolite Analysis Shell) was utilized (accessed on 10 August 2023) . More in-depth analyses were performed in antiSMASH for biosynthetic gene clusters (BGCs) encoding non-ribosomal peptide synthetases (NRPSs), polyketide synthases (PKSs), ribosomally synthesized and post-translationally modified peptides (RiPPs), and RiPP-like molecules. The annotated genome FASTA file of the isolates was used as the input file, and default antiSMASH features were assumed during the analysis. Genes encoding the riboflavin metabolism pathway were predicted by utilizing the bv-brc’s ‘Comparative Systems’ service. The bacteriocin gene detection software BAGEL version 5 was also used to generate data on bacteriocin-encoding genes . 2.5. Carbohydrate–Active Enzyme Analysis CAZymes of the isolates were searched against the CAZy database ( http://www.cazy.org/ , accessed on 20 October 2021). The database mainly includes glycoside hydrolases (GHs), glycosyltransferases (GTs), carbohydrate esterases (CEs), carbohydrate-binding modules (CBMs), auxiliary activity enzymes (AAs), and polysaccharide lyases (PLs) . 2.6. Virulome and Resistome Predictions The genomes were assessed for safety using several tools recommended in the EFSA guidance document .
ABRIcate v1.0.0 ( https://github.com/tseemann/abricate ; accessed on 20 October 2021) and ResFinder ( https://cge.cbs.dtu.dk/services/ResFinder-4.0/ ; accessed on 20 October 2021) were employed to identify the resistome in the genomes of the isolates with their default parameters. ABRIcate and VFDB v5 (virulence factor database, http://www.mgc.ac.cn/VFs/main.htm , accessed on 20 October 2021) were also employed to predict putative virulence factors with their default parameters . 2.7. Genome Sequences Data Accession Number The raw sequences for L. plantarum strains (54B, 54C, and 55A) were submitted to the Sequence Read Archive of the National Center for Biotechnology Information with accession number PRJNA1175724 and can be accessed at https://www.ncbi.nlm.nih.gov/sra/PRJNA1175724 , accessed on 20 October 2021. The sample accession numbers are as follows: SAMN44370248 for L. plantarum 54B, SAMN44370249 for L. plantarum 54C, and SAMN44370250 for L. plantarum 55A.
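As an illustration of the resistome and virulome screening described in Section 2.6, the sketch below runs ABRIcate against the ResFinder and VFDB databases and counts hits per genome. The input file names are assumptions of this sketch; an empty hit list corresponds to the "no acquired resistance genes / no virulence factors" outcome reported in the Results.

```python
# Minimal sketch of the ABRIcate-based resistome/virulome screen. Genome FASTA
# paths are illustrative; default ABRIcate parameters are used, as above.
import subprocess

GENOMES = ["assemblies/54B.fa", "assemblies/54C.fa", "assemblies/55A.fa"]

for db in ("resfinder", "vfdb"):
    for fasta in GENOMES:
        result = subprocess.run(["abricate", "--db", db, fasta],
                                capture_output=True, text=True, check=True)
        # ABRIcate prints a tab-separated report whose first line is a header;
        # every additional line is a putative resistance/virulence gene hit.
        hits = [line for line in result.stdout.splitlines()[1:] if line.strip()]
        print(f"{fasta} ({db}): {len(hits)} hit(s)")
```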
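Similarly, the pairwise average nucleotide identity calculation described in Section 2.3 (FastANI v1.33) could be scripted as follows; the genome file paths, including a locally stored copy of the L. plantarum WCFS1 reference, are assumptions of this sketch.

```python
# Minimal sketch of all-vs-all ANI with FastANI for the three isolates and the
# model probiotic WCFS1. Paths are illustrative.
import itertools
import subprocess

GENOMES = {
    "54B": "assemblies/54B.fa",
    "54C": "assemblies/54C.fa",
    "55A": "assemblies/55A.fa",
    "WCFS1": "reference/WCFS1.fa",   # assumed local copy of the reference genome
}

for (a, fa_a), (b, fa_b) in itertools.combinations(GENOMES.items(), 2):
    out = f"ani_{a}_vs_{b}.txt"
    subprocess.run(["fastANI", "-q", fa_a, "-r", fa_b, "-o", out], check=True)
    with open(out) as fh:
        # Output columns: query, reference, ANI (%), mapped fragments, total fragments
        fields = fh.readline().split()
        if fields:
            print(f"{a} vs {b}: ANI = {fields[2]}%")
```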
3.1. General Genomic Characteristics of the Strains The chromosomal properties, quality control statistics, and identification to the species level of the three L. plantarum isolates (54B, 54C, and 55A) sequenced in this study are summarized in . Assembly of the raw reads generated bacterial chromosomes, each with a size similar to that previously reported for sequenced L. plantarum isolates (range of 3–3.6 Mbp) . The two isolates (54B and 54C) possessed genome lengths of 3.39 and 3.37 Mbp, respectively, while isolate 55A possessed a genome length of 3.29 Mbp. The two isolates (54B and 54C) also contained approximately the same number of coding sequences (CDSs) (3259 and 3230, respectively) and the same GC content (44.3%), whereas isolate 55A contained fewer CDSs (3108) and a slightly higher GC content (44.5%). However, the number of tRNA, rRNA, and tmRNA genes was found to be the same among the isolates . The bv-brc taxonomic classification service that used the WGS reads to identify species of the strains showed that all three isolates belong to the L. plantarum species. Here, the completeness percentage was found to be 99.07% for all the genomes sequenced. 3.2. Comparative Genomic Analysis The genomic diversity of the three novel L. plantarum strains, together with 23 previously published L. plantarum isolates from different spectra of niches, was investigated by constructing comparative phylogenetic trees to reveal the evolutionary relationship between the three L. plantarum strains (54B, 54C, and 55A) and other L. plantarum strains . The phylogenetic tree revealed that strains 54B and 54C showed a high degree of similarity with each other . Strain 55A formed a different cluster from 54B and 54C and displayed a high degree of similarity with the strain L. plantarum LP3, which was originally isolated from vegetables . Comparative analysis of these 26 L.
plantarum genomes (23 previously published and the 3 from the present study) was performed using the ‘Comparative systems service’ of bv-brc.org. The analysis resulted in a total of 5137 protein families present across 26 strains, of which 1780 (34.65%) constitute the core genome and are present in at least one copy in all examined strains, whereas 3357 (65.35%) constituted families containing accessory gene functions only present in some strains. The comparative genomic analysis was also performed among the three genomes and the model probiotic L . plantarum WCFS1 to compare protein families among the four genomes. The analysis returned a total of 3080 protein families present across the four strains, of which 2013 (65.35%) constitute the core genome and are present in at least one copy across the examined strains, whereas 1067 (34.65%) constituted families containing accessory gene functions only present in some strains, indicating the three genomes share more genes with the model probiotic. The ANI value for our three isolates among themselves and between them and L. plantarum WCFS1 was also calculated. The ANI value for the L. plantarum 54B vs. 54C was found to be 99.9941%, indicating that they are the most related strains. The analysis of genetic variations of SNPs between closely related isolates L. plantarum 54B and 54C indicated that there are 111 single nucleotide variations (SNVs) between the genomes. 3.3. Analysis of ‘Probiotic Marker Genes’ Studies have suggested a set of genes associated with resistance to stress, active metabolism in the host, adhesion, and immunomodulation as genes with probiotic properties . Based on such findings, Carpi et al. generated an updated list of ‘probiotic marker genes’, including genes putatively accountable for stress resistance (osmotic, acid, oxidative, temperature), adhesion capacity, intestinal persistence and bile salt hydrolase activity. In total, 75 probiotic marker genes have been reported, of which about 70% correspond to genes located in the core/soft core genome , while 12 genes are located in the shell/cloud genome. Here, we analyzed the presence or absence of these probiotic marker genes in the genomes of the four strains evaluated in this work ( L. plantarum 54B, 54C, 55A, and WCFS1). As expected, most of the core genome ‘probiotic marker genes’ were found in L. plantarum 54B, 54C, 55A, and WCFS1 . The analysis also revealed the presence of the probiotic marker genes of the cloud genome, including genes for osmotic stress, bile and acid resistance, and gut persistence in the genomes studied . The genes bshA (bile salt hydrolase), gbuB (Glycine betaine/carnitine transport permease protein GbuB), gla2 (Glycerol facilitator-aquaporin gla), and xylA (xylose isomerase) were not detected in any of the four genomes. 3.4. Prediction of Secondary Metabolites and Bioactive Products Bacteriocins represent a significant class of antimicrobial peptides synthesized by LAB. In this study, the antiSMASH system predicted four fundamental areas that produce bacteriocins and secondary metabolites in the genome of L. plantarum 55A; region 1.1 (RiPP: cyclic-lactone autoinducer; location: 67,119–87,824 nt; total: 20,706 nt); region 2.1 (RiPP: RiPP-like; location: 104,505–116,655 nt; total: 12,151 nt); region 17.1 (PKS: T3PKS; location: 1–28,896 nt; total: 28,896 nt) and region 25.1 (terpene: terpene; location: 2330–23,211 nt; total: 20,882 nt) . 
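For illustration, the core/accessory partition of protein families reported in Section 3.2 can be reproduced from a presence/absence matrix with a few lines of code. The input format (a CSV with strains as rows and protein families as columns, exported from the comparative analysis) is an assumption of this sketch.

```python
# Minimal sketch: split protein families into core (present in every strain)
# and accessory (present in only some strains) and report percentages, as in
# the 34.65%/65.35% figures above. 'protein_families.csv' is a hypothetical export.
import pandas as pd

pa = pd.read_csv("protein_families.csv", index_col=0)   # rows: strains, cols: PGFams

total = pa.shape[1]
core = int((pa > 0).all(axis=0).sum())
accessory = total - core

print(f"protein families: {total}")
print(f"core:      {core} ({100 * core / total:.2f}%)")
print(f"accessory: {accessory} ({100 * accessory / total:.2f}%)")
```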
An application of the bacteriocin gene detection software BAGEL5 also revealed the presence of two areas of interest for sactipeptides and plantaricins in the genome of L. plantarum 55A. The genes encoding these sactipeptides are located at positions from 18,188 bp to 38,188 bp in the genome of L. plantarum 55A . The BAGEL5 analysis of L. plantarum 55A also demonstrated the presence of several genes encoding bacteriocin-related proteins associated with the production of and immunity to these compounds . There are six genes encoding bacteriocin-like structures, PlnA , PlnE , PlnF , PlnJ , PlnK , and PlnN , in the genome of L. plantarum 55A. These genes are located at positions from 101,924 bp to 131,362 bp. The antiSMASH system also predicted three fundamental areas that produce bacteriocins and secondary metabolites in the genome of L. plantarum 54B . The genome of L. plantarum 54C was also observed to have exactly the same bacteriocin and secondary metabolite profile as L. plantarum 54B; the minor differences were the location and total nucleotide length of the PKS region . The BAGEL5 analysis also showed the presence of an identical area of interest for sactipeptides in the genomes of L. plantarum 54B and 54C, with its location from 186,914 bp to 206,914 bp . Genome analysis with the KEGG (Kyoto Encyclopedia of Genes and Genomes) database revealed that all of our L. plantarum genomes harbor all genes required for riboflavin production, which was confirmed in the gene list annotated in the comparative systems service pathway analysis tool . The genome of L. plantarum 54C was shown to have the same genes involved in riboflavin production as L. plantarum 54B; the only difference is the location of the Alkaline phosphodiesterase I (EC 3.1.4.1)/Nucleotide pyrophosphatase (EC 3.6.1.9) gene, which starts at bp 34,650 in 54B and at bp 95,503 in 54C . The analysis also revealed the presence of enzymes of riboflavin production in the genome of L. plantarum 55A identical to those of L. plantarum 54B and 54C, the differences being the positions of the genes and a duplication of the 3,4-dihydroxy-2-butanone 4-phosphate synthase (EC 4.1.99.12)/GTP cyclohydrolase II (EC 3.5.4.25) gene in the genome of L. plantarum 55A . Genome analysis with the KEGG database also showed that strains L. plantarum 54C and 55A have all enzymes of the folate biosynthesis pathway, whereas strain L. plantarum 54B lacks the enzyme EC 2.7.6.3 (2-amino-4-hydroxy-6-hydroxymethyl dihydropteridine diphosphokinase). Here, we report that the bv-brc.org analyses revealed that the three strains and the model probiotic L. plantarum WCFS1 harbored genes coding for the protein tyrosine kinase and phosphoprotein phosphatase, which are associated with T cell receptor signaling pathways. 3.5. Carbohydrate-Active Enzymes The analysis of CAZymes revealed that the L. plantarum 54B and 54C genomes each contained 91 genes in the five CAZyme gene families: 36 GT, 41 GH, 2 AA, 3 CBM, and 9 CE genes. At the same time, CAZyme analysis on the genome of L. plantarum 55A revealed that it contained 90 genes in the five CAZyme gene families: 31 GT, 47 GH, 2 AA, 2 CBM, and 8 CE genes. Previous genomic analysis found that L. plantarum has five different families of enzymes involved in carbohydrate metabolism: GH, GT, CE, CBM and AA enzymes . We found that the most abundant CAZyme genes in the L. plantarum strain genomes belonged to the GH family, followed by the GT and CE families. The most abundant GT families in L.
plantarum strains were GT2 and GT4, whereas the most abundant GH families were found to be GH1, GH13, and GH25. 3.6. Prediction of Antibiotic Resistance Genes and Virulence Factors Although L. plantarum is a species with QPS status, and with the aim of using the strains under study in food applications, their genomes were evaluated to cover all safety concerns as recommended by the EFSA Guidance for the characterization of microorganisms used as food additives, in animal feed, and as producing organisms . Here, in our analysis, both ABRIcate and ResFinder revealed that the genomes of the strains under study harbored no antibiotic resistance genes. ABRIcate and VFDB analyses also showed that the strains under study harbored no putative virulence factors. These findings suggest the strains’ potential safety for food and other applications.
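As a sketch of the KEGG-based pathway comparison reported above (complete riboflavin biosynthesis in all three genomes; EC 2.7.6.3 of the folate pathway missing in 54B), the EC numbers assigned by the annotation can be checked against an expected list. The EC sets and file paths below are assumptions for illustration and would need to be taken from the actual KEGG pathway definitions; the input follows Prokka's .tsv column layout.

```python
# Minimal sketch of an EC-number completeness check for the riboflavin and
# folate observations above. EC lists and file paths are illustrative.
import csv

RIBOFLAVIN_ECS = {"3.5.4.25", "4.1.99.12", "3.5.4.26", "1.1.1.193",
                  "2.5.1.78", "2.5.1.9"}     # assumed key riboflavin activities
FOLATE_EC = "2.7.6.3"                        # enzyme reported missing in 54B

def ec_numbers(prokka_tsv):
    with open(prokka_tsv) as fh:
        reader = csv.DictReader(fh, delimiter="\t")
        return {row["EC_number"] for row in reader if row.get("EC_number")}

for strain in ("54B", "54C", "55A"):
    ecs = ec_numbers(f"annotation_{strain}/{strain}.tsv")
    missing_rib = sorted(RIBOFLAVIN_ECS - ecs)
    print(strain,
          "| riboflavin ECs missing:", missing_rib or "none",
          "| folate EC 2.7.6.3 present:", FOLATE_EC in ecs)
```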
This study reports the draft genome sequences of three L. plantarum strains (54B, 54C, and 55A) isolated from Ethiopian traditional cottage cheese samples and reported to have antipathogenic and immunostimulatory activities , with insights into the potential probiotic functions of these strains based on the presence of putative beneficial genes and the absence of genes of safety concern. Notably, the food-dwelling L. plantarum strains studied are representative of LAB isolates that are naturally consumed at very high levels (~10⁸–10⁹ per gram) in cottage cheese , and it is, therefore, crucial to understand the genetic makeup of these strains and their potential effect on the host. Via this in silico genomic analysis, this study aimed to obtain insights into the key genes and predict the functionality and safety concerns of these strains to foster future studies and applications. Assembly of the raw reads led to the generation of bacterial chromosomes, each with a size similar to that reported before for sequenced L. plantarum isolates (range: 3–3.6 Mbp), which is high compared to the results for other LAB . With a completeness of 99.07% for all the genomes sequenced, and with the assemblies spanning 101–104% of the median total length of L. plantarum genomes, we report genome sequences of a size comparable to other genomes of this bacterium. The phylogenetic analysis assessed the genetic relatedness between the three strains and 23 complete sequences of L. plantarum . The relationship between strain origin and gene content was assessed by analyzing the protein families. This analysis indicates two major findings: first, there is significant diversity among the genomes of L. plantarum strains, since 65.35% of the protein families assessed corresponded with the variable genome, and, second, no origin-of-isolation-dependent grouping was recorded among the strains. Notably, one of the three strains examined was positioned in a different clade, although it was isolated from the same source. These findings are in line with another study that evaluated 42 strains isolated from different sources to study the link between intra-species genetic variability and their environmental origin. Using 127 full genomes from open-access archives, a recent study performed a thorough pan-genomic analysis of L. plantarum . Based on previous works suggesting that genes linked to stress tolerance, active metabolism in the host, adhesion, and intestinal persistence are involved in the beneficial properties of lactobacilli, this study identified 75 “probiotic marker” genes in the genomes of L. plantarum . Of the 75 “probiotic marker” genes, 70% corresponded to genes located in the core and soft-core genome. Most of these genes were found in all the L. plantarum genomes used in this study . On the other hand, 12 genes ( bshA , ClpP1 , oppA3 , oppA4 , xylA , srtA , gbuB , gla2 , dps , glf , glpF1 , and cbh/bsh ) were located in the shell and cloud genome, showing that they were only present in some strains . The genes gbuB , bshA , gla2 , and xylA were not detected in any of the genomes evaluated. In fact, this study did not associate a gene or group of genes with documented probiotic functions for the L. plantarum strains. For instance, the list of strains that harbored any of these 12 genes did not include the model probiotic L. plantarum WCFS1 . Actually, L.
plantarum carries four bsh genes ( bsh 1, bsh 2, bsh 3, and bsh 4), which could function in place of bshA , and the opuA operon from Bacillus subtilis is homologous to the gbu operon . In this study, the genomes harbored genes coding for proteins involved in withstanding environmental stresses. This also supports our previous findings that the strains possess the ability to tolerate acidic conditions (e.g., at pH 3) and a high bile salt concentration (0.5%) . The CAZy analysis predicted five major classes of carbohydrate-active enzymes in the genomes of the strains under study, i.e., GTs, GHs, CEs, CBMs, and AAs. Because these genes are involved in the metabolism and assimilation of complex non-digestible carbohydrates, they are crucial for the bacteria’s adaptation to the gastrointestinal environment and its interaction with the host . GTs are crucial for the catalysis of the transfer of sugars from activated donor molecules to acceptors and are very important for the formation of surface structures, which are recognized by host immune systems . GTs can also produce structures similar to mucins by making O-linked glycosylations on serine residues . CBMs can enhance the catalytic activity of the CAZymes on the substrate by binding to the substrate of the CAZymes . Hence, we assume that the existence of these CAZymes helps the strains in their survival, competitiveness, and persistence within the host. In our previous study, we have shown that these strains have strong immunostimulatory activity in the human THP1-Dual™ reporter monocytes through activation of the nuclear factor kappa B (NF-κB) and interferon regulatory factor (IRF) pathways . This could possibly be attributed to the production of exopolysaccharides (EPS) associated with the T cell receptor signaling pathway by the enzyme-coding genes harbored in the genomes of the strains under study. Four fundamental regions in the genome of L. plantarum 55A were identified to produce bacteriocins and secondary metabolites, including a cyclic-lactone autoinducer (postulated to act in quorum sensing, allowing cells to assess their density and regulate the production of adhesins used for biofilm formation as well as enzymes involved in the utilization of different sugars) , RiPP-like molecules (which exhibit antibacterial activity) , T3PKS (which produce secondary metabolites with diverse biological activities, including antimicrobials) , and terpenes (which have antimicrobial, antiparasitic, antiallergenic, antispasmodic, antihyperglycemic, anti-inflammatory, and immunomodulatory properties) . The identification of these four categories of compounds led to the notion that L. plantarum 55A does indeed have potential for use as a probiotic , although the exact beneficial role of these predicted properties remains to be ascertained in follow-up mechanistic studies. Our findings are in agreement with previous studies hinting that cyclic peptides, similar to the cyclic lactone autoinducer peptide, govern critical pathways of signal transduction, further targeting polysaccharide biosynthesis and sugar utilization enzymes. Another study also reported the same four bacteriocin- and secondary metabolite-producing regions in the L. plantarum 13-3 genome as those identified in the L. plantarum 55A genome. Similarly, the strains L.
plantarum 54B and 54C had three identical bacteriocin- and secondary metabolite-producing regions in their genomes (T3PKS, terpene, and cyclic-lactone autoinducer). The bacteriocin gene-detection software BAGEL v5 also showed the presence of two areas of interest for sactipeptides and plantaricins with antibacterial activities in the genome of L. plantarum 55A, indicating the strain’s promise for future probiotic applications. The presence of six genes encoding bacteriocin-like structures ( PlnA , PlnE , PlnF , PlnJ , PlnK , and PlnN ) in the genome of L. plantarum 55A makes the strain similar to L. plantarum C11 in that respect. PlnA is a peptide pheromone that induces the production of two peptide bacteriocins, PlnEF and PlnJK . It also has a membrane-permeabilizing, strain-specific antimicrobial effect . PlnEF and PlnJK are class IIb two-peptide bacteriocins that require approximately equimolar amounts of the monomers in order to obtain maximal antimicrobial activity . Meanwhile, no function could be determined for the protein encoded by PlnN . A generally applicable operational definition of strain with a strong biological basis has not yet been provided and may not exist . In theory, genomes with as little as one SNV difference may be considered to be different strains. Nonetheless, because of the overwhelming number of strains that would result from metagenomic data, this method is not usually recommended . There are no standards governing how many SNVs constitute a different strain or whether such SNVs must be fixed in the population or affect the phenotype . Some authors set a cut-off of less than or equal to two SNV differences , while others set an ANI value of greater than 98% for isolates to be considered to come from the same natural strain. The genomes of our two closely related strains, L. plantarum 54B and 54C (isolated from the same fermentation and the same plate), had the same CAZyme profiles (in both number and family), three identical BGCs, and an ANI value of 99.9941%. However, the genomic analyses revealed differences in the location of BGCs and a high number of SNV differences (111), indicating that these isolates are closely related but different strains. This finding also meets the regulatory requirement set in the EFSA Guidance document and the EFSA’s statement for an unequivocal taxonomic identification at the strain level. LAB can also enhance the nutritional content of fermented foods by producing vitamins and cofactors, contributing to functional foods. Riboflavin, also called vitamin B2, is a water-soluble vitamin that serves as the precursor of the two essential coenzymes flavin adenine dinucleotide (FAD) and flavin mononucleotide (FMN), which are essential in the redox reactions within the cell. The riboflavin biosynthetic pathway involves seven distinct enzymatic activities, namely, GTP cyclohydrolase II, 3,4-dihydroxy-2-butanone 4-phosphate synthase, pyrimidine deaminase and reductase (a bifunctional enzyme), a phosphatase, lumazine synthase, and riboflavin synthase; these catalyze the reactions starting from one guanosine triphosphate (GTP) and two molecules of ribulose 5-phosphate (Ru5P) as the initial precursors . Humans are not able to synthesize vitamin B2, so it must be obtained from the diet . Riboflavin can be produced by many microorganisms, including fungi (such as yeast) and bacteria. Indeed, the complete genome sequence of L.
plantarum SK151 isolated from kimchi is reported to harbor a complete rib operon , and the Limosilactobacillus fermentum KUB-D18 genome is also reported to contain genes mainly involved in the metabolism of cofactors and vitamins, including riboflavin . Vitamin production by LAB varies considerably, being a species-specific or strain-dependent trait. Hence, the genetic capacity for riboflavin biosynthesis is a species- and/or strain-specific trait in LAB . As a result, the inability of LAB to produce riboflavin is not uncommon. For example, the sequenced genome of the model probiotic L. plantarum strain WCFS1 contains an incomplete rib operon . The genomes analyzed in the present work, however, harbored all genes required for riboflavin production, suggesting the isolates’ potential as probiotics. Genome analysis with the KEGG database also showed that the strains L. plantarum 54C and 55A have all enzymes of the folate biosynthesis pathway, indicating their potential for application in food fortification. Finally, one of the most important findings of this study is the lack of a resistome and virulome in the strains studied, and this is consistent with another study that reported the non-pathogenicity of the L. plantarum strain . It is very important to verify that LAB strains to be consumed as probiotics lack virulence factors and acquired antimicrobial resistance properties prior to considering them safe for human and animal consumption . The natural resistance of probiotic strains may be advantageous as it promotes both therapeutic and preventive benefits when concomitantly administered with antibiotics and facilitates intestinal microbiota recovery . Overall, the lack of a resistome and virulome, in addition to the previously confirmed in vitro functional capabilities of the strains, opens an avenue for a wide spectrum of research with regard to human health-related applications of the bacteria. This study reported the genome sequences of three L. plantarum strains isolated from Ethiopian traditional cottage cheese, a rich source of LAB strains. The results obtained in this study, together with the previous in vitro research conducted, demonstrate the potential of cheese-origin L. plantarum strains as candidate probiotics. The in silico genomic analysis of the strains revealed the presence of putative genes and gene clusters coding for stress resistance, adhesion, a cyclic lactone autoinducer, terpenes, T3PKS, and RiPP-like products, as well as sactipeptides and plantaricins, supporting their potential as probiotics. The genomes under study also harbor all genes required for riboflavin production. Moreover, none of the strains evaluated proved to have antibiotic resistance genes or virulence factors, suggesting their potential safety for probiotic applications. Collectively, the genomic information supports the safe use of these strains as probiotics and opens new possibilities to exploit the health-promoting potential of the strains.
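To make the strain-distinction reasoning above concrete, the two literature-derived cut-offs (≤2 SNV differences, or ANI > 98% for isolates from the same natural strain) can be expressed as a small decision helper. The thresholds are those cited in the discussion and are illustrative, not a definitive standard.

```python
# Minimal sketch of the strain-distinction heuristic discussed above.
def same_strain(ani_percent: float, snv_count: int,
                ani_cutoff: float = 98.0, snv_cutoff: int = 2) -> bool:
    """Treat two genomes as one strain only if both criteria agree."""
    return ani_percent > ani_cutoff and snv_count <= snv_cutoff

# Values reported for L. plantarum 54B vs. 54C in this study:
print(same_strain(ani_percent=99.9941, snv_count=111))   # False -> distinct strains
```

Under this combined rule, the 111 SNVs outweigh the near-identical ANI, which matches the conclusion that 54B and 54C are closely related but distinct strains.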
Action-driven remapping of hippocampal neuronal populations in jumping rats
2f1b2490-dce0-45c1-a103-6b3af62e1382
9245695
Physiology[mh]
We trained rats to walk back and forth on a linear track (1.8 m) for water reward, available at a platform at each end of the track . After they moved comfortably and steadily on the track, the rats were trained to jump a gap within the track. During this shaping period, the gap distance was gradually increased from 5 to 30 cm. Following these pretraining sessions (5 to 15 d in different rats), the formal jump sessions started. A 30-cm gap between two parts of the track was created at one of three fixed locations on the track . After 7 to 20 prejump control trials (no gap), a gap was introduced, requiring the rat to jump across it in both directions of travel. After 20 trials, the gap position was shifted, and after the second block of 20 trials, a third gap position was introduced. The order of the gap positions varied randomly across sessions. After the jump trials (a trial equaled travel in one direction), the rat ran post-jump control trials without a gap until it was satiated. Each animal was implanted with silicon probes above the dorsal hippocampal CA1 pyramidal layer. The recording headstage also contained an accelerometer, which allowed continuous monitoring of the x , y , and z positions; velocity; and acceleration of the rat’s head . In addition, head position was tracked with an OptiTrack system including six infrared cameras that allowed for the three-dimensional reconstruction of the animals’ head position and head orientation to within 1 mm at 120 Hz ( Materials and Methods ). Behavioral Observations. Traversing the track on control trials took a median of 2.0 s at a median speed of 76 cm/s ( n = 2,212) ( SI Appendix , Fig. 1 ). Rats learned to jump the gaps quickly at high efficiency. One rat fell once from the track to the floor (1 m) during training, but never during recordings, over the course of more than 3,000 jumps in 28 sessions across the 4 rats, corresponding to >99.9% efficacy. Wild rats can fall from a height of 50 feet (∼15 m) without getting hurt . The act of jumping can be divided into four distinguishable phases: preparation, takeoff, flight, and landing . During preparation, the rat aligned its four feet to the edge of the track, lowered its head steadily, or moved its head up and down a few times ( Movie S1 ). Such self-generated head bobbing produces retinal motion of the image of the landing platform and is critical to determine the desired distance of the jump . The time of takeoff was determined from the accelerometer as the peak of the second derivative of the accelerometer reading in the horizontal direction . Similarly, the location and time of landing on the track were determined from the minimum of the second derivative of the accelerometer reading in the vertical direction ( and SI Appendix , Fig. 1 ). The duration of the preparation phase was longest at the middle gap (gap 2) in both directions of travel ( ; mean ± S.D. wait time: gap 1 = 1.04 ± 0.52 s, gap 2 = 1.31 ± 0.49 s, and gap 3 = 1.14 ± 0.67 s). Flight time (i.e., time in air) was consistent across sessions and rats ( ; mean ± S.D. flight time: jump 1 = 0.17 ± 0.03 s [ n = 1,536]; jump 2 = 0.18 ± 0.04 s; jump 3 = 0.19 ± 0.05 s). During the flight, the peak velocity exceeded 1.7 m/s, and the velocity and acceleration profiles varied stereotypically across trials ( and SI Appendix , Fig. 1 ; mean ± S.D. velocity: jump 1 = 176 ± 33 cm/s; jump 2 = 171 ± 48 cm/s; jump 3 = 164 ± 51 cm/s).
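For illustration, the takeoff and landing detection described above (peak of the second derivative of the horizontal accelerometer signal for takeoff; minimum of the second derivative of the vertical signal for landing) could be implemented as in the following sketch. The sampling rate and smoothing width are assumptions, not values from the study.

```python
# Minimal sketch of accelerometer-based takeoff/landing detection for a single
# jump trial. Sampling rate and smoothing are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter1d

FS = 1000.0  # assumed accelerometer sampling rate (Hz)

def jump_events(acc_horizontal, acc_vertical, sigma_ms=10.0):
    sigma = sigma_ms / 1000.0 * FS
    d2_h = np.gradient(np.gradient(gaussian_filter1d(acc_horizontal, sigma)))
    d2_v = np.gradient(np.gradient(gaussian_filter1d(acc_vertical, sigma)))
    takeoff_idx = int(np.argmax(d2_h))                               # peak (horizontal)
    landing_idx = int(np.argmin(d2_v[takeoff_idx:])) + takeoff_idx   # minimum after takeoff (vertical)
    return takeoff_idx / FS, landing_idx / FS                        # times in seconds
```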
The rat walked relatively slowly prior to the jump and maintained momentum after jumping by galloping after landing ( ; before jump = 57 cm/s; after jump = 120 cm/s; P = 0 by Kruskal-Wallis test). Galloping was relatively stereotyped at ∼5 Hz, as illustrated by the vertical bands in . This difference in running patterns likely reflected the animal’s increased confidence after the jump, since it did not depend on the available travel distance before or after the jump. Local Field Potential and Population Firing-Rate Correlates of Gap Jumping. The theta oscillation in the local field potential (LFP) increased in frequency and power during jumping, reaching a maximum frequency (9.3 ± 1.0 Hz) during the flight phase and after landing, in time with the rat’s velocity and the firing rates of interneurons . Interestingly, the rate changes of interneurons and pyramidal cells in CA1 preceded those in CA3 ( SI Appendix , Fig. 2 ), implying that CA1 activity was not inherited from CA3 neurons. Theta frequency during postjump galloping was significantly higher than during prejump walking . The firing rates of interneurons showed a relatively linear relationship with speed during running , but this relationship became nonlinear at very high speeds , associated with jumping . Jumping reset the phase of theta waves. The reliability of theta-phase reset was quantified by the phase consistency across trials ( SI Appendix , Materials and Methods and Fig. 10 ), referenced to the moment of takeoff ( and ). Theta-phase reset was also visible in the increased phase consistency of interneuron spiking . The consequence of theta-phase reset was persisting phase consistency for several theta cycles, although significant phase consistency was present only for three cycles surrounding the jump . Given this short duration and the presence of only one or two theta cycles during the jump itself, we could not determine precisely whether the reset occurred during the takeoff or the flight. Neuronal Population-Measure Correlates of Gap Jumping. We examined encoding of the jump by computing population vector correlations, which tell us how hippocampal activity is correlated in space. The population vector correlations were computed by first z -scoring the trial-averaged firing rate for a given position of each neuron with mean activity above a noise threshold in at least one jump or control condition and then calculating the correlation across neurons between their activities at each combination of position bins . Average population vector correlation decreased to near zero within <40 cm of separation , as reported previously in similar situations . The population vector correlation analysis includes all active neurons and, therefore, is not biased by the definition of a place cell. In contrast to the high spatial correlation across control trials, spatial correlations of CA1 population activity between control and jump trials showed a strong reduction, except at the two extreme ends of the track. Importantly, the decreased spatial correlation was not restricted to the gap area; it was already low prior to the jump and persisted after the jump . Comparison of jump trials across different gap conditions also yielded low spatial correlations , indicating remapping of neuronal activity across jump trials as well. Yet, in contrast to the steady near-zero correlations between control and jump trials , comparison of jump trials revealed a visible increase of population vector correlations at the jump location across gap positions .
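The population vector correlation described above can be sketched as follows: trial-averaged rate maps are restricted to neurons above a noise threshold in at least one condition, z-scored per neuron, and then correlated across neurons for every pair of position bins. Array shapes and the threshold value are assumptions of this sketch.

```python
# Minimal sketch of the population vector correlation between two conditions
# (e.g., control vs. jump trials). Inputs: (n_neurons, n_position_bins) arrays
# of trial-averaged firing rates. The noise threshold is an assumption.
import numpy as np

def population_vector_correlation(rates_a, rates_b, min_rate=0.5):
    # Keep neurons with activity above threshold in at least one condition
    keep = (rates_a.max(axis=1) > min_rate) | (rates_b.max(axis=1) > min_rate)
    a, b = rates_a[keep], rates_b[keep]
    # z-score each neuron's rate map within its condition
    zscore = lambda x: (x - x.mean(axis=1, keepdims=True)) / (x.std(axis=1, keepdims=True) + 1e-12)
    a, b = zscore(a), zscore(b)
    n_bins = a.shape[1]
    corr = np.empty((n_bins, n_bins))
    for i in range(n_bins):
        for j in range(n_bins):
            corr[i, j] = np.corrcoef(a[:, i], b[:, j])[0, 1]
    return corr   # corr[i, j]: population similarity between position bins i and j
```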
This observation implies the possible presence of neurons that represent different gaps similarly (see remarks about jump cells in Discussion ). Population representation and changes across conditions were remarkably similar between CA1 and CA3 populations . These findings are in agreement with previous analyses showing that population vector correlation is a more sensitive measure than population firing-rate analysis (compare and ) . Single Neuron-Firing Correlates of Gap Jumping. The population analysis suggested that the act of jumping resulted in a combination of rate and global remapping of place fields of individual hippocampal pyramidal neurons in virtually the entire track, even though the distant spatial cues and motor behavior remained similar before and, to a great extent, after the jump itself. In a typical session, only a small fraction of neurons displayed stable place fields with unaltered rates during jump trials, whereas the overwhelming majority of place fields were modified one way or another. In addition to the minority of stable neurons (group 1), neurons with a modified firing pattern could be classified by subjective criteria into the following four major groups: group 2, novel place fields; group 3, neurons whose rates were decreased (truncated or attenuated); group 4, neurons with increased (amplified) rates during jump trials; and group 5, jump cells. Each of these changes could occur before, during, or after the jump (gap). Since hippocampal pyramidal cells form different place fields and sequences during opposite-direction runs on linear tracks , we evaluated place fields separately on left and right travels. Stable place fields were found only at the start and end locations of the track ( and SI Appendix , Fig. 3 ). Neurons with novel place fields (group 2) did not have a place field on prejump control trials, and the new field could occur anywhere on the track, including areas before, after, and even during the jump ( and SI Appendix , Fig. 3 ). Neurons in group 3 had an existing place field in control trials, which was suppressed in jump trials. Spike suppression was typically largest when the place field coincided with the gap, but suppression was also present when the gap was either before or after the location of the place field ( and SI Appendix , Fig. 4 ). When the gap coincided with the beginning of the place field, the remaining part of the place field was still expressed ( , fourth graph down in column). Firing rates of group 4 place cells increased on jump trials. Similar to the attenuated group, enhanced spiking could occur not only during gap jumping itself but also before or after the gap, and the center of the place field moved either toward or away from the gap ( and SI Appendix , Fig. 5 ). The distinction between the attenuated and amplified groups is somewhat arbitrary, since spike rate decreases and increases could be observed within the same place cell with different jump locations ( SI Appendix , Figs. 4 and 5 ). Neurons classified as jump cells (group 5) also meet the criterion of a novel place field, because firing at the future place field was absent during control trials. However, criteria for jump cells also included the requirements that they fired in relationship with two or three gaps and that their fields moved with the gap on different trial types. Jump cells occurred not only during jumping but could be present either before or after the gap ( and SI Appendix , Fig.
6 ), suggesting that the motor action of jumping was not the necessary driver of spiking. When the jump cell occurred prior to departure, its duration lasted throughout the 1- to 1.5-s wait time, during which its spikes underwent a gradual phase precession . With the exception of one neuron ( SI Appendix , Fig. 7 ), jump fields occurred only during either left or right, but not both, directions of travel. Therefore, a more appropriate description of these neurons would be “conjunction cells of gap and head direction.” Although the error-prone nature of subjective classification of firing pattern types is acknowledged, the distribution of these five groups was similar in both CA1 and CA3 regions , implying a relatively random redistribution of the neuron pool in the context of a modified map. Overall, the analysis of individual neurons supports the remapping conclusion of the population vector analysis. In addition to the aforementioned five groups, the remaining putative pyramidal neurons either had too few spikes to quantify place fields or displayed scattered and nonpatterned firing throughout the track (group 6: unclassified or “other” neurons; SI Appendix , Fig. 8 ). Yet, firing fields of some of these unclassified neurons were also modified in jump trials ( SI Appendix , Fig. 8 ). Fast-firing, putative interneurons showed a correlation with running speed , but the relationship between speed and interneuron firing rate was also modified by context, as demonstrated by the different interneuron firing rates at different gaps and the difference between left and right travels ( SI Appendix , Fig. 9 ). Remapping and the Internal Temporal/Theta-Phase Structure of Hippocampal Populations. Despite the large changes in the firing patterns of individual neurons, the population structure of the neurons remained similar. First, we investigated the distance relationship between place fields and theta timescale-related timing of place-cell pairs by calculating the compression index . The compression index quantifies the relationship between the travel distances (or travel time) between the peaks of overlapping place fields and the time (phase) offset of their spikes within theta cycles. When the time and phase offsets of the theta timescale cross-correlograms were plotted as a function of the distance between the peaks of their firing fields, we found that despite the extensive firing-rate modification of individual place fields, the compression index remained similar between control and jump trials ( ; for detection of theta phase, see SI Appendix , Fig. 10 ), indicating that fine-timescale properties of modified spike sequences within place-coding cell assemblies were preserved despite the jump perturbation. Another measure of place coding is the relationship between the animal’s instantaneous position and the theta phase at which the neuron spikes [i.e., phase precession ]. Phase precession describes the association between a linear variable (position on track) and a circular phase (theta cycle) variable . Despite the prominent spike-rate modulation and moderate place-field shape distortions during jump trials, the position–theta-phase spike relationship remained invariant , implying that theta phase is a more reliable “code” of position than a firing rate code . This invariance was a result of reduced place-field size and altered slope (radians per field width) during jump trials .
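As a sketch of the compression measure used above, the theta-timescale lag of each overlapping place-cell pair (taken from its spike cross-correlogram) can be regressed on the distance between the two field peaks; the slope of this relationship is the compression index. The inputs are assumed to be precomputed per cell pair.

```python
# Minimal sketch of the compression index: theta-timescale lag (ms) per unit of
# place-field peak separation (cm), estimated by a linear fit across cell pairs.
import numpy as np

def compression_index(field_distances_cm, theta_lags_ms):
    d = np.asarray(field_distances_cm, dtype=float)
    lag = np.asarray(theta_lags_ms, dtype=float)
    slope, intercept = np.polyfit(d, lag, 1)   # ms of within-theta lag per cm of distance
    return slope, intercept
```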
The average reduction of place-field size was likely a result of the several place fields silenced midfield due to the presence of the gap . In a few examples, the major part of the place field coincided with the gap, and spiking was completely suppressed during the jump. Despite the absence of spikes for the major part of the place field, when firing resumed at the end of the field, the theta-phase assignment of spikes was the same as during control trials ( and SI Appendix , Fig. 10 ) . Traversing the track on control trials took a median of 2.0 s at a median speed of 76 cm/s ( n = 2,212) ( SI Appendix , Fig. 1 ). Rats learned to jump the gaps quickly and at high efficiency. One rat fell once during training (a 1-m drop from track to floor) but never during recordings, over the course of more than 3,000 jumps in 28 sessions across the 4 rats, corresponding to >99.9% jumping efficacy; wild rats can fall from a height of 50 feet (∼15 m) without getting hurt . The act of jumping can be divided into four distinguishable phases: preparation, takeoff, flight, and landing . During preparation, the rat aligned its four feet to the edge of the track, lowered its head steadily, or moved its head up and down a few times ( Movie S1 ). Such self-generated head bobbing produces retinal motion of the image of the landing platform and is critical for determining the desired distance of the jump . The time of takeoff was determined from the peak of the second derivative of the accelerometer reading in the horizontal direction . Similarly, the location and time of landing on the track were determined from the minimum of the second derivative of the accelerometer reading in the vertical direction ( and SI Appendix , Fig. 1 ). The duration of the preparation phase was longest at the middle gap (gap 2) in both directions of travel ( ; mean ± S.D. wait time: gap 1 = 1.04 ± 0.52 s, gap 2 = 1.31 ± 0.49 s, and gap 3 = 1.14 ± 0.67 s). Flight time (i.e., time in air) was consistent across sessions and rats ( ; mean ± S.D. flight time: jump 1 = 0.17 ± 0.03 s [ n = 1,536]; jump 2 = 0.18 ± 0.04 s; jump 3 = 0.19 ± 0.05 s). During the flight, the peak velocity exceeded 1.7 m/s, and the velocity and acceleration profiles were stereotyped across trials ( and SI Appendix , Fig. 1 ; mean ± S.D. velocity: jump 1 = 176 ± 33 cm/s; jump 2 = 171 ± 48 cm/s; jump 3 = 164 ± 51 cm/s). The rat walked relatively slowly prior to the jump and maintained momentum after jumping by galloping after landing ( ; before jump = 57 cm/s; after jump = 120 cm/s; P = 0 by Kruskal-Wallis test). Galloping was relatively stereotyped at ∼5 Hz, as illustrated by the vertical bands in . This difference in running patterns likely reflected the animal’s increased confidence after the jump, since it did not depend on the available travel distance before or after the jump. Theta oscillations in the local field potential (LFP) increased in frequency and power during jumping, reaching a maximum frequency (9.3 ± 1.0 Hz) during the flight phase and after landing, in time with the rat’s velocity and firing rates of interneurons . Interestingly, the rate changes of interneurons and pyramidal cells in CA1 preceded those in CA3 ( SI Appendix , Fig. 2 ), implying that CA1 activity was not inherited from CA3 neurons. Theta frequency during postjump galloping was significantly higher than during prejump walking . 
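The sketch below illustrates the accelerometer-based takeoff and landing detection described above. Note that the Results describe peaks of the second derivative of the accelerometer signal, whereas Materials and Methods (below) describe peaks of the signed horizontal acceleration; the sketch follows the simpler Methods description, and the sampling rate, toy traces, and lack of smoothing are illustrative assumptions.

```python
import numpy as np

def detect_jump_events(acc_h, fs):
    """Illustrative takeoff/landing detection from a horizontal accelerometer trace.

    Following the simpler description in Materials and Methods, takeoff is taken as
    the peak of the horizontal acceleration and landing as the peak of the negative
    horizontal acceleration. acc_h is a 1-D array; fs is the sampling rate in Hz.
    Real data would additionally require smoothing and restriction to a window
    around the gap; those steps are omitted here.
    """
    t = np.arange(acc_h.size) / fs
    takeoff_idx = int(np.argmax(acc_h))
    landing_idx = int(np.argmin(acc_h))
    return t[takeoff_idx], t[landing_idx]

# Toy trace: a forward thrust at 1.00 s and a braking (landing) transient at 1.18 s,
# giving a flight time close to the ~0.18 s reported above.
fs = 1000.0
t = np.arange(0, 2, 1 / fs)
acc_h = np.exp(-((t - 1.00) / 0.02) ** 2) - np.exp(-((t - 1.18) / 0.02) ** 2)
takeoff, landing = detect_jump_events(acc_h, fs)
print(f"takeoff = {takeoff:.2f} s, landing = {landing:.2f} s, flight = {landing - takeoff:.2f} s")
```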
The firing rates of interneurons showed a relatively linear relationship with speed during running , but this relationship became nonlinear at the very high speeds associated with jumping . Jumping reset the phase of theta waves. The reliability of theta-phase reset was quantified by the phase consistency across trials ( SI Appendix , Materials and Methods and Fig. 10 ), referenced to the moment of takeoff ( and ). Theta-phase reset was also visible as increased phase consistency of interneuron spiking . The consequence of theta-phase reset was persisting phase consistency for several theta cycles, although significant phase consistency was present for only three cycles surrounding the jump . Given this short duration and the presence of only one or two theta cycles during the jump itself, we could not determine precisely whether the reset occurred during the takeoff or the flight. We examined encoding of the jump by computing population vector correlations, which quantify how hippocampal population activity is correlated across positions. The population vector correlations were computed by first z -scoring the trial-averaged firing rate at each position for every neuron with mean activity above a noise threshold in at least one jump or control condition and then calculating the correlation across neurons between their activities at each combination of position bins . The average population vector correlation decreased to near zero within <40 cm of separation , as reported previously in similar situations . The population vector correlation analysis includes all active neurons and, therefore, is not biased by the definition of a place cell. In contrast to the high spatial correlation across control trials, spatial correlations of CA1 population activity between control and jump trials showed a strong reduction, except at the two extreme ends of the track. Importantly, the decreased spatial correlation was not restricted to the gap area; it was already low prior to the jump and persisted after the jump . Comparison of jump trials across different gap conditions also yielded low spatial correlations , indicating remapping of neuronal activity across jump trials as well. Yet, in contrast to the steady near-zero correlations between control and jump trials , comparison of jump trials revealed a visible increase of population vector correlations at the jump location across gap positions . 
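A minimal sketch of the population vector correlation computation just described is given below, assuming a neurons-by-position-bins matrix of trial-averaged firing rates per condition; the noise threshold, per-neuron z-scoring convention, bin count, and toy data are interpretations and placeholders rather than the study's exact parameters.

```python
import numpy as np

def population_vector_correlation(rates_a, rates_b, min_rate=0.5):
    """Correlation matrix between population vectors of two conditions.

    rates_a, rates_b : arrays of shape (n_neurons, n_position_bins) with
        trial-averaged firing rates of the same neurons in two conditions.
    min_rate : assumed noise threshold (Hz); neurons below it in both
        conditions are excluded, mirroring the inclusion rule in the text.
    Each neuron's rate map is z-scored across position bins within each
    condition, then the correlation across neurons is computed for every
    pair of position bins (condition A bin i vs. condition B bin j).
    """
    keep = (rates_a.mean(axis=1) > min_rate) | (rates_b.mean(axis=1) > min_rate)
    a, b = rates_a[keep], rates_b[keep]
    za = (a - a.mean(axis=1, keepdims=True)) / (a.std(axis=1, keepdims=True) + 1e-12)
    zb = (b - b.mean(axis=1, keepdims=True)) / (b.std(axis=1, keepdims=True) + 1e-12)
    n_bins = za.shape[1]
    corr = np.empty((n_bins, n_bins))
    for i in range(n_bins):
        for j in range(n_bins):
            corr[i, j] = np.corrcoef(za[:, i], zb[:, j])[0, 1]
    return corr

# Toy example: 50 neurons, 40 position bins; condition B is a neuron-shuffled
# ("remapped") copy of A, so the same-position correlation should be near zero.
rng = np.random.default_rng(1)
rates_a = rng.gamma(shape=2.0, scale=1.0, size=(50, 40))
rates_b = rng.permutation(rates_a, axis=0) + rng.normal(0, 0.2, size=(50, 40))
C = population_vector_correlation(rates_a, rates_b)
print("mean same-position correlation:", round(float(np.nanmean(np.diag(C))), 2))
```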
Jumping produced a stereotypic behavior associated with consistent electrophysiological patterns, including phase reset of LFP theta, global firing-rate changes, and population vector shifts of hippocampal neurons. These collective neuronal patterns were associated with modified firing of individual hippocampal neurons, appearance of novel place fields, and the emergence of jump-specific cells. Despite large changes in firing rates, the theta phase versus animal-position relationships of place cells remained stable. 
Thus, planning and executions of actions are as effective in altering hippocampal neuronal organization as are environmental cues. Resetting Theta Oscillations. A reliable relationship between various motor actions and the phase of the theta cycle has previously been reported . In our experiments, jumping was associated with LFP theta-phase reset. The flight time itself was <0.2 s, typically one or two theta cycles, but the trial-to-trial phase consistency persisted for several theta cycles after the jump. The phase reset coincided with an increase of both theta amplitude and frequency, as also reported during vertical jumping . The phase reset was also evident from the session average of interneuron firing, essentially tracking the phase-shifted LFP theta cycles. Finally, when the gap coincided with the place field of a pyramidal neuron, a sudden phase shift in spike–theta-phase preference was detected. This sudden shift may have occurred because in one or two theta cycles, the rat’s head was displaced by 30 cm, thus maintaining the relationship between spatial cues and spike–theta-phase relationship . Alternatively, the phase shift might reflect the altered relationship between spiking of the jump cell and the phase-shifted reference LFP theta. Another issue that has remained ambiguous is whether jumping induced phase resetting of theta oscillations or whether the timing of jumping was biased by the phase of the theta cycle . The theta reset might have been brought about the corollary discharge from the motor command signal or from the sensory feedback of muscle activity . Reorganization of the Hippocampal Map by Jumping. Firing sequences of hippocampal neurons during exploration in one- or two-dimensional environments have been assumed to be driven by internal mechanisms . For each environment, a new map with different firing patterns is retrieved, due to shifting from one neuronal “attractor” to another , known as remapping or as an altered manifold . Under some conditions, reorganization is manifested as a change in firing rates at the same locations (“rate remapping”), whereas between distinct environments, new place fields appear and old ones disappear or move to a new location [“global remapping” ]. Remapping can be induced not only by spatial manipulations and changes in sensory inputs but also by motivational state , experience , and memory load . In our experiments, nearly all place fields, with the exception of a small fraction of stable place fields at the two ends of the track, likely anchored to the start, or goal, platforms , were modified one way or another. In jump trials, firing pattern reorganization occurred as a combination of global and rate remapping. To explain these changes, one can make the argument that the absence of a piece of the track is a local sensory cue and, thus, this experimenter-induced change is the sole explanation of remapping . However, local cues are not expected to induce such a strong remapping as we have observed in our experiments. Several previous studies have examined the impact of overt or hidden local cues on hippocampal neuronal firing. These experiments vary from reporting no effect to reporting moderate effects on place-field activity, including place-field enrichment, smaller place fields, increased spatial tuning (i.e., firing-rate increase), altered phase precession, and increased or reduced spatial encoding . Common to these previous experiments (but see refs. 
and ) is that the described firing-pattern alterations were confined to the particular segment of the environment where the density of visual and tactile cues was enriched. In contrast, in our experiments, remapping occurred virtually on the entire track. In fact, firing-pattern changes were similar at, near, and far from the gap, implying that, in jump trials, the brain regarded the track differently from the same track in control trials without a gap. The implication is that from the beginning of each trial, a unique attractor, containing partially the same neurons, was retrieved, depending on whether it was a control trial or a jump trial with different gap locations. This hypothesis is supported by the relatively equal probability distribution of the five groups of place fields of both CA1 and CA3 neurons on jump trials, including persisting place fields, novel place fields, place fields with increased or decreased firing rates, and jump cells. These findings suggest that the expected different action plan (i.e., preparation for jumping) from the beginning of the trial exerts a stronger internal impact on the hippocampus than do changes in local cues. Neurons whose firing rates were attenuated (group 3) or amplified (group 4) during jump trials may be related (or identical) to “object-location memory” cells . When objects are removed from an environment or repositioned, object-location memory cells, located in both CA1 and CA3 regions, increase their firing at the locations where the objects used to be . Similar to previous interpretations , rate remapping is an indication of conjunctive features of neurons, the ability to simultaneously encode multiple types of information . An explicit mechanism offered for such conjunctive coding is that the spike phase of theta is primarily responsible for locating the animal in the environment , whereas the independent firing rate is available to code for other features, such as speed, objects, goals, or motivation . This hypothesis is supported by our observations that despite the strong changes in firing rates, the theta-spike phase versus animal-position relationship remained unaltered during jump trials. According to the classic theory, place fields are induced and maintained by a combination of sensory inputs and intracellular mechanisms . An alternative view is that neuronal sequences are self-organized at the circuit levels and such preexisting sequences are matched to particular environments . Our observation that the compression of distance to theta-phase (time) offset between place-cell pairs remained unaltered during jump trials, despite changes in firing patterns of individual place cells, supports the internally organized hippocampal model. Previous experiments also found that theta time offset of spikes between place-cell pairs remains fixed, despite increasing distances between place-field peaks in larger environments , running speed differences , or different temporal requirements . Yet, action- and environment-anchored reference frames can coexist in the hippocampus . Cases where the preexisting place field coincided with the location of jumping provide insight into the triggering mechanisms and maintenance of place fields. In a few group 3 neurons, the beginning or a large part of a place field was completely obliterated by the jump. Despite the absence of spikes in a large portion of the place field, spiking resumed after the jump and at the expected theta phases. 
These findings also support the internally organized model, which assumes that that spiking of hippocampal place cells within their fields is an assembly product . From this assembly point of view, we hypothesize that the place field in group 3 neurons was not abolished but suppressed by inhibition, and when the neuron was released from inhibition after the jump, it continued to fire together with its less-affected recorded and nonrecorded assembly peers. This hypothesis can explain how neurons that were silenced during the jump continued to fire at the same theta phase as during control trials. Sentinel Function of the Hippocampus. Remapping of place fields is often interpreted as an example of an operation in a dynamically changing circuit capable of updating neuronal assembly sequences and needed to support episodic memory . One can assume that jumping a gap and traveling back and forth on a linear track do not need the hippocampus (although we have not tested this directly); therefore, the observed sequential changes are not relevant to behavior. Yet, in line with our findings, previous studies have already shown that hippocampal neuron firing patterns do change in response to various environmental, motivation, and motor variables, even in situations which do not require the hippocampus or its allied structures . These studies are compatible with the hypothesis that a main function of the hippocampal system is to continuously monitor the activity of the neocortex and respond selectively to unexpected changes with appropriate selection of assembly sequences. In this sentinel function, the hippocampus perpetually compares the difference between neocortical neuronal messages and the reconstructed, predicted versions of those messages by the hippocampus . Only when the discrepancy between inputs and planned actions (“error”) is large does the hippocampal circuit induce new neuronal trajectories . Jump Cells. In a previous study, rats were trained to avoid an electric shock by jumping up onto the rim of a box with 33-cm walls . A fraction of hippocampal pyramidal neurons fired selectively during the vertical jump, and the authors suggested that these jump cells corresponded to place cells in the z (i.e., vertical) dimension. This contention is further supported by a report showing that during rearing, specific cells may become active at particular locations . Our findings allow for a different interpretation. First, in our experiments, the rats jumped horizontally and the elevation of their head during the flight was less than the length of the rat . Second, while jumps cells were active at the same z distance from the track, their x coordinate was different, yet neurons repeatedly fired at two or three gap locations and at different rates. Third, the majority of our jump cells did not coincide with the jump itself but were active during running either before or after the gap. Finally, virtually all jump cells were active in only one travel direction. These findings eliminate the possibility that jump cells were linked strictly to motoric actions. Such direction (or “context”) and location specificity suggests that jump cells are not fundamentally different from place cells but require the conjunction of a cue and an appropriate action plan. In the few jump cells whose fields coincided with the act of jumping itself, the phase preference of spikes moved from the peak to the trough of the theta waves in just one or two cycles. 
This observation is consistent with the hypothesis that jump cells possess the essential features of place cells , since the magnitude of a full theta-cycle phase precession corresponds to the length of the place field , irrespective how many theta cycles it takes for the animal to traverse the field [i.e., speed ]. Neurons designated as jump-specific cells in our study share several features of a related (or identical) class of neurons known as landmark-vector cells . Landmark-vector cells fire at a similar distance and direction from a landmark and may follow the landmark when moved. Similarly, our jump cells also displayed vectorial features, since they fired selectively at the gap or at a constant distance from the gap but only during either left- or right-bound travels. 
All experiments were approved by the Institutional Animal Care and Use Committee at New York University Medical Center. Details of surgery, locations of the probes, and recordings are available in ref. and in the SI Appendix, Materials and Methods . The takeoff during jumping was determined by the peak of the horizontal acceleration, and the landing was determined by the peak of the negative horizontal acceleration. Wait time was determined by the time spent at a speed less than 9 cm/s before the jump takeoff. Theta phase was extracted by filtering the LFP signal with a third-order Butterworth bandpass filter (6 to 13 Hz) and then applying the Hilbert transform to extract the instantaneous phase. Circular deviance, the circular analog of the SD, was measured across each time bin. Place fields were determined as described in ref. . Jump-specific cells were identified by eye. Circular–linear correlations, circular deviance, and the Rayleigh test were performed using the CircStat toolbox for circular statistics .
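The theta-phase extraction and circular deviance (phase consistency) measures described here can be sketched in Python with NumPy/SciPy as follows; the original analyses used the CircStat toolbox, and the sampling rate, trial alignment, definition of circular deviance as sqrt(-2 ln R), and toy signals are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_phase(lfp, fs, band=(6.0, 13.0), order=3):
    """Instantaneous theta phase (radians): zero-phase third-order Butterworth
    bandpass (6-13 Hz, as described) followed by the Hilbert transform."""
    b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, lfp)))

def circular_deviance(phases):
    """Circular analog of the SD, here taken as sqrt(-2*ln R) (one standard
    definition), where R is the mean resultant length across trials."""
    R = np.abs(np.mean(np.exp(1j * np.asarray(phases))))
    R = min(max(R, 1e-12), 1.0)
    return float(np.sqrt(-2.0 * np.log(R)))

# Toy example: 20 trials of LFP aligned to takeoff (t = 0), with a common theta
# phase after the jump but a random phase before it, mimicking a phase reset.
fs = 1250.0
t = np.arange(-1.0, 1.0, 1 / fs)
rng = np.random.default_rng(2)
phases = []
for _ in range(20):
    pre = np.cos(2 * np.pi * 8.0 * t + rng.uniform(0, 2 * np.pi))  # random phase before jump
    post = np.cos(2 * np.pi * 8.5 * t)                             # shared phase after reset
    lfp = np.where(t < 0, pre, post) + rng.normal(0, 0.3, t.size)
    phases.append(theta_phase(lfp, fs))
phases = np.array(phases)
for t_probe in (-0.5, 0.1):
    idx = int(np.searchsorted(t, t_probe))
    print(f"circular deviance at t = {t_probe:+.1f} s: {circular_deviance(phases[:, idx]):.2f} rad")
```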
Implementation of a Multi-Disciplinary Team and Quality of Goals of Care Discussions in Palliative Surgical Oncology Patients
a206326c-263d-4bb2-99cf-bdaf0fc70c6c
10625938
Internal Medicine[mh]
Study Design and Population We performed a single-center prospective cohort study of advanced cancer patients who received palliative interventions (i.e., surgery, endoscopic or interventional radiologic procedures) at the Division of Surgery and Surgical Oncology of the Singapore General Hospital between October 2019 and March 2022. This study was approved by the SingHealth Centralized Institutional Review Board, and informed consent obtained before enrolment of patients. Intervention: MD-PALS Team The MD-PALS team, assembled in January 2021, included members from surgical oncology, medical and radiation oncology, palliative care physicians, gastroenterologists, interventional radiologists, advanced practice and specialty nurses, nutritionists, psychologists, and medical social workers. Fortnightly meetings were held by the MD-PALS team to discuss all patients receiving palliative interventions from January 2021 onward. Before assembly of the MD-PALS team, specialist consultations from the respective members of the group were requested on an ad hoc basis by the primary team caring for the patient. Advanced cancer patients who received palliative interventions between October 2019 and December 2020 were in the pre-MD-PALS group, whereas those admitted between January 2021 and March 2022 were in the post-MD-PALS group. Outcome Measure: Quality of Discussions on GOC The primary outcome of the study was the quality of GOC discussions, which was measured using a four-point composite score as follows: 1 (intent and expected benefits and morbidities associated with palliative surgery or other interventions), 2 (conveyance of overall prognosis), 3 (consideration of patients’ priorities and goals of treatment), 4 (determination of code status). A score of 0 or 1 was assigned if the aforementioned components were respectively absent or present from clinical documentation, with a maximum possible GOC discussion quality score of 4 points. These components were reviewed and considered essential in GOC discussions during a consensus meeting held among the MD-PALS team members. Communication documentation review was performed by two physicians (J.J.Y.S. and J.S.M.W.) independently, and conflicts were resolved by a third senior author (C.S.M.C.). We also reviewed the number of consultations by specialist palliative care physicians and other members of the MD-PALS team during each patient’s index surgical admission for palliative intervention. Other Study Covariates Data on patient demographics, tumor characteristics, palliative interventions, postoperative or postprocedural complications, length of index admission, systemic chemotherapy or radiotherapy, and hospital readmission were collected via manual chart review of electronic medical records at the index surgical admission for palliative intervention. Statistical Analysis Continuous characteristics were compared using the two-sample t test, whereas categorical characteristics were compared using Fisher’s exact test. Overall survival (OS) was measured from the date of palliative intervention to the date of death. Alive patients were censored as of 11 August 2022, when data were cut off for analysis, and no patients were lost to follow-up evaluation. The Kaplan-Meier method was used to estimate OS distribution, and the log-rank test was used to compare OS between the two patient groups. 
Because the longer follow-up duration among the pre-MD-PALS patients could distort the OS comparison, a sensitivity analysis was performed in which the OS duration of all patients who survived beyond 12 months was censored at 12 months. Mean GOC discussion quality scores between the pre- and post-MD-PALS groups were compared using the two-sample t test, with additional adjustments made for age, Eastern Cooperative Oncology Group (ECOG) performance status, and whether the patient underwent palliative surgery using analysis of covariance. A segmented linear regression model was fitted to evaluate the change in the level and trend of the average quarterly GOC discussion quality score after MD-PALS implementation. Diagnostic checks of the fitted model were performed to ensure that all model assumptions were met. To evaluate the robustness of the results from this interrupted time series (ITS) analysis, a sensitivity analysis based on segmented beta regression of the mean proportion of quality components of GOC discussions achieved by the patients was performed. Exploratory subgroup analyses were performed to examine the differences in GOC discussion quality scores between the pre- and post-MD-PALS groups by type of palliative interventions using logistic regression. Firth’s penalized likelihood estimation method was used for covariates with a complete separation issue. Goodness of fit was assessed based on the Hosmer-Lemeshow test. All statistical tests were two-sided with a 5 % significance level. Analyses were performed using SAS 9.4 (SAS Institute Inc., Cary, NC, USA) and Stata 16.1 (StataCorp, College Station, TX, USA). 
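For illustration only, the segmented (interrupted time series) linear regression described above could be set up as in the Python/statsmodels sketch below; the published analyses used SAS and Stata, and the quarterly scores in the sketch are invented placeholders, not study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder quarterly mean GOC discussion quality scores (NOT study data):
# roughly five pre-implementation and five post-implementation quarters,
# matching the October 2019 to March 2022 study window.
df = pd.DataFrame({
    "score":   [1.2, 1.3, 1.4, 1.3, 1.5, 2.5, 2.6, 2.7, 2.6, 2.8],
    "quarter": np.arange(10),                        # time, in quarters
})
df["post"] = (df["quarter"] >= 5).astype(int)        # post-MD-PALS indicator
df["time_after"] = np.maximum(df["quarter"] - 5, 0)  # quarters since implementation

# Segmented regression: baseline trend ('quarter'), immediate level change at
# implementation ('post'), and change in trend afterward ('time_after').
model = smf.ols("score ~ quarter + post + time_after", data=df).fit()
print(model.params)       # the 'post' coefficient is the level change
print(model.conf_int())   # 95% confidence intervals
```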
The study enrolled 44 (34.9 %) patients in the pre-MD-PALS group and 82 (65.1 %) patients in the post-MD-PALS group. The median follow-up period was 27.1 months in the pre-MD-PALS group and 10.6 months in the post-MD-PALS group. The two groups did not differ significantly in terms of clinical and demographic characteristics (Table ). The most common indication for consideration of palliative interventions was intestinal obstruction. 
After intervention, the pre- and post-MD-PALS groups did not differ significantly in hospital readmissions (50.0 % vs 47.6 %; P = 0.853), and the median overall survival (OS) durations were respectively 9.2 and 5.8 months ( P = 0.410). The mean GOC discussion quality score was significantly higher in the post-MD-PALS group than in the pre-MD-PALS group (2.64 vs 1.34; P < 0.001; Table ). After adjustment for age, ECOG performance status, and whether the patients received palliative surgery, the mean GOC discussion score remained significantly higher in the post-MD-PALS group than in the pre-MD-PALS group (2.61 vs 1.34; P ≤ 0.001). The proportion of patients who received quality discussions on goals of surgery, prognosis, priorities and preferences of treatment options, and code status increased significantly after MD-PALS implementation (all P < 0.05; Table ). Subgroup analysis of GOC discussion quality scores by palliative intervention type did not show differences for the patients who underwent surgical, endoscopic, or radiologic interventions (Table S2). The interrupted time series (ITS) analysis showed a significant increase in the average quarterly GOC discussion quality score by 1.93 points (95 % confidence interval [CI], 0.96–2.90; P = 0.003) after MD-PALS implementation (Fig. ). The trend of the average quarterly GOC discussion quality score, however, remained the same as during the pre-MD-PALS era. The conclusions on the changes in the level and trend of the average GOC discussion quality score after MD-PALS implementation based on the sensitivity ITS analysis were similar to those of the original ITS analysis. Details of this sensitivity ITS analysis are presented in Fig. S1 and Table S1. A higher proportion of the post-MD-PALS patients than the pre-MD-PALS patients had specialist palliative care consultations (41.5 % vs 31.8 %; P = 0.339) and inputs from the various members of the multi-disciplinary team (84.1 % vs 81.8 %; P = 0.804) during their index admission for palliative interventions, although these differences did not reach statistical significance (Table ). Overall, findings from our study suggest that the implementation of a multi-disciplinary palliative intervention team (MD-PALS) and the involvement of the various specialist providers in the care of palliative surgical oncology patients improved one aspect of palliative care delivery: the quality of the GOC discussions conducted. This may be attributable to increased collaboration and a ready exchange of clinical information as a result of the MD-PALS platform. The primary surgical oncologist, informed by MD-PALS members, can now better identify key end-of-life (EOL) issues surrounding the palliative surgical patient and is well placed to conduct quality GOC discussions during a surgical admission. In previous efforts to improve the quality of GOC discussions, investigators have evaluated the use of didactic lectures, one-on-one training, or even role-playing with standardized patients. – Although these are important in equipping clinicians with the skills necessary to conduct GOC discussions, their effectiveness may be limited by varying levels of proficiency among the members of a multidisciplinary team and differences in opinions regarding the most goal-concordant management plan. In our anecdotal experience, it is not uncommon for patients to have encounters with various members of the multidisciplinary team, with a resulting lack of clarity in terms of their GOC and management plan. 
Therefore, the MD-PALS team was formed at our institution to build a “Community of Practice” aimed at improving palliative care processes, enhancing interdisciplinary collaboration, and streamlining communication with patients and their families. Increased involvement of specialist palliative care physicians in the care of palliative surgical oncology patients during the post-MD-PALS era (41.5 % vs 31.8 %) also likely contributed to the improvements in the quality of GOC discussions. A randomized controlled trial by Vanbutsele et al. demonstrated that early and systematic integration of palliative care improves quality of life and is more beneficial for patients with advanced cancer than palliative care consultations offered on demand. The authors concluded that the results may be attributable to the different focus of treatment offered by the palliative team compared with the oncology team. In our experience, the specialist palliative care team at our institution contributed significantly in terms of integrating the recommendations of MD-PALS and facilitating the GOC discussions with the patients and families. The current study was limited by the relatively small sample and the unequal cohort sizes of the pre- and post-MD-PALS groups. We hypothesized that the smaller pre-MD-PALS cohort was likely due to changes in workflows and elective hospital admissions during the COVID-19 pandemic. – Because the pre-MD-PALS era coincided with the outbreak of COVID-19 in Singapore, the lower proportion of patients and families receiving GOC discussions also may be explained by isolation and visiting policies. In addition, there is no existing international consensus on what constitutes quality GOC discussions, and our MD-PALS consensus definition of the four essential GOC components, informed by the literature and our local clinical experience, may limit the generalizability of this study. – In conclusion, the implementation of an MD-PALS team improved the quality of GOC discussions among palliative surgical oncology patients. These findings provide an opportunity and direction for future studies on improving the quality of palliative care of advanced cancer patients. Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 14 kb) Supplementary file2 (DOCX 13 kb) Supplementary file3 (DOCX 17 kb) Supplementary file4 (JPG 17 kb)
Isolated MLH1 Loss by Immunohistochemistry Because of Benign Germline
b5fd9d1c-03fe-44dd-90f5-b14983bc6338
9489174
Anatomy[mh]
Mismatch repair (MMR) proteins MLH1, PMS2, MSH2, and MSH6 recognize and correct errors caused by DNA polymerase during replication. Loss-of-function variants in one of these four proteins result in the accumulation of single-nucleotide variants, insertions, and deletions, particularly in microsatellites, repetitive regions of DNA. , Tumors with this phenotype are termed microsatellite unstable (MSI) or MMR-deficient (dMMR); those that retain the ability to correct errors are microsatellite stable (MSS) or MMR-proficient. MMR/MSI status can be assessed using two methodologies: (1) immunohistochemistry (IHC) or (2) molecular MSI testing of extracted DNA, either by polymerase chain reaction (PCR) of standard microsatellite loci (eg, Bethesda or pentaplex panels) or by next-generation sequencing (NGS). - NGS methods have potential added benefits of detecting mutations in MMR genes and/or assessing tumor mutational burden (TMB), if appropriately validated. Concordance between methodologies to detect dMMR status are high, generally > 90%; however, discrepancies do arise. , These inconsistencies may result from preanalytical variables, including low tumor cellularity, neoadjuvant treatment, improper fixatives, tumor heterogeneity, functional nonsynonymous mutations, or rarely germline polymorphisms. , False-positive MMR IHC has also been described in neoplasms with high mutational burden and POLE mutations. CONTEXT Key Objective Mismatch repair protein immunohistochemistry (IHC) is increasingly used to inform therapy selection, as well as heritable cancer syndrome testing. This case series evaluated causes of discrepancy between mismatch repair IHC and microsatellite instability testing by next-generation sequencing. Knowledge Generated Two uncommon variants of MLH1, p.V384D and p.A441T, were found to interfere with MLH1 immunohistochemistry, producing the appearance of lost expression, which conflicted with negative microsatellite instability testing. IHC interference was associated with loss of heterozygosity at the MLH1 locus and was anti-MLH1 antibody clone-dependent (G168-15). In the index case, MLH1 IHC interference led to unnecessary germline genetic testing and affected selection for immunotherapy. Relevance Rare germline polymorphisms can result in incorrect IHC results, potentially affecting selection of optimal therapy and the decision to pursue germline testing. This case further highlights the need for expert molecular pathologic review and communication between clinical and molecular oncology teams. MMR IHC and MSI testing have long been used for colorectal and endometrial carcinomas as a screening test for inherited deleterious alterations in MMR genes, which results in Lynch syndrome, previously called hereditary nonpolyposis colorectal cancer (HNPCC), and accounts for approximately 5% of colorectal carcinomas (CRCs). Patients with Lynch syndrome are also at increased risk for neoplasms of the endometrium, upper gastrointestinal tract, pancreaticobiliary system, urinary tract, prostate, ovaries, and brain. Somatic dMMR and MSI are also encountered in a variety of neoplasms. For instance, approximately 15% of sporadic colon carcinomas are dMMR, most commonly because of hypermethylation of the MLH1 promoter or double somatic mutations in MLH1 or other MMR genes. , MMR status in sporadic neoplasms may also have prognostic significance, as exemplified by longer overall and disease-free survival in patients with dMMR CRC. 
Additionally, dMMR/MSI is emerging as an important therapeutic marker, most prominently in predicting response to immune checkpoint inhibitors. Current evidence supports use of immune checkpoint inhibitors as first-line therapy for advanced dMMR or MSI CRC. Notably, pembrolizumab is US Food and Drug Administration–approved for any dMMR/MSI neoplasms, regardless of histologic type or site of origin. , As a result, laboratory testing for MMR deficiency is increasingly performed to help direct treatment decisions. Given the importance of MMR/MSI status in selecting patients for additional germline testing, and its role in prognostication and therapeutic selection, pathologists and clinicians should understand factors that might result in improper dMMR classification. We describe a case of false loss of MMR IHC in a patient with metastatic CRC, caused by IHC interference because of a rare benign germline polymorphism in MLH1 with loss of heterozygosity (LOH). Further investigation identified a series of systematic false loss of MLH1 IHC in a series of clinical cases because of MMR IHC interference. These findings highlight the utility of a comprehensive approach in determining MMR status, and integrating evaluation of MMR genes with assessment of MSI and TMB, with expert molecular pathologist interpretation. Key Objective Mismatch repair protein immunohistochemistry (IHC) is increasingly used to inform therapy selection, as well as heritable cancer syndrome testing. This case series evaluated causes of discrepancy between mismatch repair IHC and microsatellite instability testing by next-generation sequencing. Knowledge Generated Two uncommon variants of MLH1, p.V384D and p.A441T, were found to interfere with MLH1 immunohistochemistry, producing the appearance of lost expression, which conflicted with negative microsatellite instability testing. IHC interference was associated with loss of heterozygosity at the MLH1 locus and was anti-MLH1 antibody clone-dependent (G168-15). In the index case, MLH1 IHC interference led to unnecessary germline genetic testing and affected selection for immunotherapy. Relevance Rare germline polymorphisms can result in incorrect IHC results, potentially affecting selection of optimal therapy and the decision to pursue germline testing. This case further highlights the need for expert molecular pathologic review and communication between clinical and molecular oncology teams. Case Selection and Clinicopathologic Analysis A morphomolecular discrepancy was identified in a clinical case submitted for NGS as part of an institutional review board–approved study designed to comprehensively evaluate gastrointestinal malignancies for single nucleotide variants, indels, fusions, and MSI/TMB status. This patient (indicated as case 1 in Table ) provided informed consent for publication. Following institutional review board protocol approval at the University of Washington, cases were selected by searching institutional laboratory information systems (PowerPath, University of Washington Medical Center (UWMC) genetics database) for neoplastic specimens with concurrent MMR IHC, MSI testing, and/or MMR gene sequencing. This search yielded two additional cases with discrepancy between MLH1 IHC and microsatellite status, and another with a candidate MLH1 IHC-interference variant and LOH (four cases total). All available histologic slides, immunohistochemical stains, and diagnoses were reviewed by board-certified anatomic pathologists (M.M.Y. 
and D.E.B.), and molecular data were reviewed by board-certified molecular pathologists (E.Q.K. and V.A.P.). Clinicopathologic demographics, including age, sex, diagnoses, and treatment data, were gathered from the electronic health record. MMR IHC MMR IHC was performed at UWMC and referring institutions; laboratories and antibodies used are listed in Table . At UWMC, IHC was performed on 4-μm thick unstained slides from formalin-fixed paraffin-embedded (FFPE) tissue blocks using an automated platform (BOND-III; Leica Microsystems, Buffalo Grove, IL). Following deparaffinization and rehydration, slides were rinsed and incubated with the primary antibody, washed in buffer, followed by incubation with a peroxidase-labeled polymer (BOND Polymer Refine; Leica). Bound antibody was localized via a peroxidase reaction with 3,3′-diaminobenzidine tetrahydrochloride (DAB+; Dako, Carpinteria, CA) as chromogen. Slides were washed in water, counterstained using hematoxylin, dehydrated, and mounted. Positive controls were performed to evaluate for appropriate staining. All immunohistochemical testing was performed in Clinical Laboratory Improvement Amendments–certified clinical laboratories. Targeted NGS Molecular characterization was performed on one of two DNA-based targeted next-generation sequencing panels, as previously described. In brief, DNA was extracted from FFPE tissue using the Qiagen GeneRead DNA FFPE Kit (Qiagen, Valencia, CA) before shearing and library preparation using KAPA HyperPrep reagents (Roche, Wilmington, MA). Prepared libraries were hybridized to a set of custom probes designed to target panels of genes chosen for their relevance in cancer diagnosis, prognosis, and/or treatment (UW-OncoPlex version 6, which targets 340 genes) or their importance in cancer susceptibility (BROCA, which targets 69 genes). Libraries were then sequenced on Illumina NextSeq500 and HiSeq2500 systems (Illumina, San Diego, CA), and sequences were processed through an automated, custom-designed bioinformatics pipeline developed by the University of Washington NGS Laboratory and Analytics group before analysis by board-certified molecular pathologists (E.Q.K. and V.A.P.). In addition to identifying single-nucleotide variants, insertions and deletions, fusions, and copy-number alterations, these assays detect microsatellite instability and TMB (OPXv6 only). MLH1 Methylation After sodium bisulfite conversion using the EZ DNA Methylation-Lightning Kit (Zymo Research, Irvine, CA, Cat. No. D5030), tumor DNA was amplified using fluorescence-based, real-time quantitative PCR, as previously described. In brief, the promoter region of MLH1 was amplified using methyl-specific CG-specific PCR primers flanking an oligonucleotide probe with a 5ʹ fluorescent reporter dye (6-FAM) and a 3ʹ quencher dye (BHQ); the housekeeping gene COL2A1 was amplified for normalization of DNA input using the ViiA 7 Real-Time PCR Instrument (Thermo Fisher Scientific, Waltham, MA), and the results were evaluated by a molecular pathologist (E.Q.K.). Consent for Publication The patient discussed as the index case provided informed consent for publication. 
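The panels above report TMB alongside MSI status, and the index case described below had a TMB of 1 mutation/megabase. As a minimal sketch, assuming the generic definition of TMB as eligible somatic variants divided by the panel's coding footprint in megabases (the variant-eligibility rules and the footprint value here are hypothetical, not the UW-OncoPlex pipeline's), the calculation could look like this:

```python
# Hedged sketch: tumor mutational burden (TMB) as eligible somatic variants per megabase
# of sequenced coding territory. The example numbers and filtering assumptions are
# illustrative only, not the rules of any specific clinical pipeline.

def tumor_mutational_burden(n_eligible_somatic_variants: int, panel_footprint_bp: int) -> float:
    """TMB = eligible somatic variants / (panel coding footprint in megabases)."""
    megabases = panel_footprint_bp / 1_000_000
    return n_eligible_somatic_variants / megabases

# Example: 2 eligible variants over a hypothetical 1.8 Mb coding footprint ~= 1.1 mut/Mb,
# in the "low TMB" range reported for the index case.
print(round(tumor_mutational_burden(2, 1_800_000), 1))
```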
The index case prompting this series was a 50-year-old male patient, who presented to an outside institution with anorectal bleeding. Colonoscopy demonstrated a rectal mass, and biopsy revealed an invasive adenocarcinoma arising in an adenoma. MMR IHC was performed at the outside institution and showed loss of MLH1 expression (Biocare G168-15 antibody) with intact PMS2, MSH2, and MSH6 (Fig ). Subsequent MLH1 promoter hypermethylation testing was negative. Staging imaging revealed stage IV disease with direct local invasion of a seminal vesicle and multiple enlarged regional and nonregional lymph nodes. Upon referral to our institution, initiation of systemic therapy was recommended by the multidisciplinary team on the basis of the stage of disease, with the intent to complete a short course of systemic chemotherapy, followed by chemoradiation to the rectal primary and all nodal disease (including the M1a nonregional nodes) and then resection of the primary. On the basis of the apparent dMMR status, treatment with pembrolizumab was initiated. As part of a research protocol to uniformly perform panel-based testing, the initial colonoscopy biopsy was tested for MSI by NGS, revealing MSS status. In addition to revealing low TMB (1 mutation/megabase), the panel also identified ERBB2 amplification (for a complete molecular profile of the tumor, see the Data Supplement). Germline genetic testing was concurrently sent and ultimately returned negative for pathogenic MLH1 variants, although a variant of uncertain significance was noted in PMS2 . Given the discrepancy in this case between NGS sequencing results and MMR IHC, rare germline polymorphisms in the MMR genes were further reviewed. This review identified an MLH1 polymorphism (p.V384D, NM_000249.3:c.1151T>A) accompanied by LOH (variant allele fraction [VAF] 0.78). This MLH1 variant, which has been classified as not pathogenic on the basis of criteria developed by the InSiGHT Mutation Interpretation Committee, has previously been reported in the literature in association with MLH1 IHC loss. IHC at our institution using a different antibody clone (Cell Marque G168-728) revealed intact expression of MLH1 (Fig ), and the neoplasm was subsequently reclassified as MMR-proficient. For the patient of interest, immunotherapy was discontinued upon recognition of the MMR-proficient status, and treatment with infusional fluorouracil, leucovorin, and oxaliplatin plus bevacizumab was initiated. Restaging was not performed after the single cycle of pembrolizumab, but there was minimal change in the carcinoembryonic antigen (109 to 85 ng/mL) during that first cycle, whereas a more dramatic decrease (85 to 4 ng/mL) was observed over the subsequent 2 months of cytotoxic chemotherapy, suggesting minimal clinical benefit from immunotherapy. The patient subsequently underwent a low anterior resection with diverting loop ileostomy. 
Histologic examination of the resection specimen revealed only a single focus (0.3 cm) of moderately differentiated adenocarcinoma with no evidence of lymphatic or perineurial invasion; all evaluated lymph nodes were negative. The patient continues to do well on therapy more than 6 months after his surgical resection, with negative imaging and no evidence of minimal residual disease using the Signatera ctDNA assay. Two additional neoplastic cases harboring the MLH1 p.V384D variant accompanied by LOH were identified from institutional archives. Both cases were MSS with low mutational burdens (Table ). Similar to the index case, MLH1 expression was not detected using the MLH1 G168-15 antibody, but was retained using the G168-728 antibody (Table ). Neither of these patients received immunotherapy or underwent germline genetic testing per medical record review. To identify other nonpathogenic variants with potential MMR IHC interference, we queried our anatomic pathology and molecular databases for neoplastic cases with isolated MMR protein loss and discordant MSS status by NGS testing. One case with apparent loss of MLH1 by IHC at the referring institution but MSS by NGS was found to harbor a different benign germline polymorphism, MLH1 p.A441T, with associated LOH in neoplastic tissue. MLH1 IHC with the G168-728 antibody at our institution demonstrated weak, but intact (retained), expression (Table ). Treatment course for this patient is unknown (reference laboratory testing only). With the increasing utility of MSI testing for clinical and therapeutic decision making, valid and reliable testing is paramount. Herein, we report two germline MLH1 variants, MLH1 p.V384D and MLH1 p.A441T, which appear to interfere with MMR IHC, resulting in isolated false loss of MLH1 expression. The former MLH1 variant is relatively common in some populations, with a maximum allele frequency of up to 0.03, whereas the latter only occurs at a maximum allele frequency of 0.001. The frequency of expected IHC interference by combined presence of a MLH1 p.V384D or p.A441T and LOH is difficult to estimate for several reasons: frequencies of these polymorphisms vary by population, LOH is not uniformly measured or reported in databases, and quantification of the baseline frequency of LOH at the MLH1 locus is confounded by MSI-high neoplasms. In a 7-year period of testing neoplastic tissues with NGS at our institution, MLH1 p.V384D was detected 123 times and associated with possible LOH (VAF > 0.6) in 11 cases (approximately 9%). Multiplying this value by the global minor allele frequency for MLH1 p.V384D (0.00519) crudely estimates this combination of events may occur in 1 of every 2,000 cases. Although this number is only an estimate on the basis of limited data, it does suggest that this type of event is uncommon. Substitution of the neutral hydrophobic amino acid valine at codon 384 for negatively charged aspartic acid is not a conservative change, but this alteration occurs in a poorly conserved region of MLH1 outside of its known functional domains. , Prior studies investigating the functional consequences of the MLH1 p.V384D variant suggested mildly reduced coimmunoprecipitation with PMS2 and weakened β-galactosidase activity in a yeast two-hybrid assay with PMS2. However, Takahashi et al reported the MMR activity of both the p.V384D and p.A441T variants in yeast and in vitro MMR assays to be within their assay's normal limits (defined as 60% or higher). 
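The LOH heuristic (germline variant VAF > 0.6) and the back-of-envelope frequency estimate above can be made concrete with a short sketch. This is a hedged illustration assuming simple read-count VAFs; it is not the authors' pipeline, and the cutoff and figures simply mirror those quoted in the text (VAF 0.78 in the index case, 11 of 123 p.V384D detections with possible LOH, global minor allele frequency 0.00519).

```python
# Hedged sketch (not the authors' pipeline): a read-count VAF, the VAF > 0.6 heuristic
# for possible LOH used in this series, and the crude 1-in-2,000 estimate quoted above.

def variant_allele_fraction(alt_reads: int, ref_reads: int) -> float:
    total = alt_reads + ref_reads
    if total == 0:
        raise ValueError("no coverage at this position")
    return alt_reads / total

def possible_loh(vaf: float, cutoff: float = 0.6) -> bool:
    """A heterozygous germline variant is expected near VAF 0.5; values well above that
    (here > 0.6) suggest loss of the wild-type allele in tumor tissue."""
    return vaf > cutoff

# Index case: VAF 0.78 flags possible LOH (the read counts themselves are illustrative).
vaf = variant_allele_fraction(alt_reads=78, ref_reads=22)
print(vaf, possible_loh(vaf))                      # 0.78 True

# Crude frequency estimate from the text: (11/123 detections with possible LOH) x
# (global minor allele frequency 0.00519) ~= roughly 1 in 2,000 cases.
combined = (11 / 123) * 0.00519
print(f"~1 in {round(1 / combined):,}")            # ~1 in 2,154
```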
Previous publications describing focal MLH1 loss of expression in association with MLH1 p.V384D did not show increased microsatellite instability, and TMB remained low. These findings are concordant with our own, in which tumors harboring p.V384D were MSS with a low TMB, despite associated LOH. Whether MLH1 p.V384D is associated with carcinoma risk, despite the lack of contribution to MSI status, remains controversial. Multiple case-control style studies have reported enrichment of p.V384D among patients with colon cancer. , How or if this germline allele may contribute to carcinogenesis is unclear, given the nonassociation with familial cancer syndromes, lack of consistent second hit variants in the neoplasms, and lack of association with microsatellite instability. Both MLH1 variants, p.V384D and p.A441T, have been classified as benign (ClinVar IDs: 41632 and 89696). Regardless, this uncommon germline variant is not associated with Lynch syndrome or microsatellite instability. Thus, it still constitutes a false-positive interference with MMR IHC in the context of this study. IHC for MMR proteins is commonly used as a screening test for Lynch syndrome, typically in combination with BRAF genetic testing and/or MLH1 promoter hypermethylation testing to exclude the majority of somatic deleterious variants. Thus, neoplasms with loss of MLH1 by IHC, negative for MLH1 promoter hypermethylation and/or negative for BRAF p.V600E, are recommended for germline MMR gene testing. Following this pathway, our index case was negative for MLH1 promoter hypermethylation, prompting referral for medical genetics to determine the etiology of his MLH1 loss. Tumor NGS was also performed as part of a research protocol to uniformly perform panel-based testing, including evaluation of MSI by NGS. The apparent discrepancy between MLH1 IHC and MSI testing by NGS was identified and repeat MLH1 IHC with a different antibody demonstrated intact expression. Given that MLH1 loss was demonstrably dependent on the antibodies used for MLH1 IHC, we hypothesize that these specific variants disrupt an epitope recognized by a subset of anti-MLH1 antibodies. Since similar IHC patterns were observed for both MLH1 p.V384D and MLH1 p.A441T, ie, apparent loss with antibodies other than G168-728, we predict that these two amino acids contribute to a common epitope. To our knowledge, epitope mapping has not been published for the antibodies used in this study and protein structural data for MLH1 are unavailable for the region spanning p.V384—A441, precluding examination of their spatial relationship. , On the basis of prior studies using G168-728, the MLH1 region containing an epitope can be narrowed to amino acids 321-505. , Exon 16 truncation mutations of MLH1 exhibit retained immunoreactivity with G168-15, indicating that an epitope lies in the amino acid range 1-632. Specific epitopes for other MLH1 antibody clones commonly used in clinical IHC, such as M1 (Ventana) and ES05, are not published. Additional studies are necessary to determine whether p.V384D and p.A441T interfere with these antibody clones. The IHC results of case 4, however (Table ), imply that MLH1 IHC interlaboratory discrepancies are not fully explained by differences in antibody clones. Despite using the same G168-15 antibody, IHC at two different institutions reported conflicting results (intact v lost) during the evaluation of MLH1 IHC for case 4. 
Alternative explanations for differential detection of these MLH1 variants by IHC include differences in other aspects of MMR IHC tests across laboratories, such as antigen-retrieval approaches, binding conditions, or antibody titer, which may conceivably result in variant-specific loss of MLH1 detection. Ultimately, whether the isolated MLH1 loss observed at the referring institution was attributable to IHC interference or one of the alternative explanations, our findings confirmed that germline evaluation for HNPCC in this patient was unnecessary. Perhaps a more compelling need to understand limitations of MMR IHC is its emerging use in diverse neoplasms as an indication for immune checkpoint inhibitor therapy. - Lack of adequate MMR in carcinoma cells predicts response to immunotherapy, and there is generally high concordance among MMR IHC, PCR-based MSI testing, and NGS-based MSI testing in detecting dMMR. However, false-positive results in any selected modality may result in suboptimal efficacy of immune checkpoint inhibitors and less therapeutic benefit than standard chemotherapies such as infusional fluorouracil, leucovorin, and oxaliplatin for CRCs. , MSI testing by NGS, particularly in the context of panel testing, has the advantage of confirming MSI status with simultaneous detection of MMR gene variants and assessment of TMB. However, cost-effective NGS-based MSI testing is not uniformly available. For those institutions where such testing is unavailable, paired testing with PCR-based MSI may be a useful adjunct to MMR IHC when the underlying etiology of IHC loss is unclear or atypical, although MSI PCR may have limitations in noncolorectal cancers. In the setting of a benign germline variant that causes false loss on MMR IHC testing, the expected pattern would be isolated loss of an MMR protein expression and discordant MSS status. Reflexive testing on a more comprehensive panel might then be advised to determine the source of the discrepancy. Interference with MLH1 IHC by p.V384 and/or p.A441T may be detected using MSS samples with known variants and LOH. Another strategy to mitigate the risk of uncommon variant interference with MLH1 IHC is to maintain a second MLH1 antibody clone for additional testing in cases with isolated loss of MLH1 and/or suspicion of interference. In summary, we present a case series identifying two rare germline polymorphisms that result in false loss of MLH1 by IHC. In at least one case, this finding led to initial selection of suboptimal therapy and pursuit of unnecessary germline genetic testing. Additional interference variants will likely be identified as MMR/MSI evaluation increases in frequency, given the role in this tumor-agnostic biomarker in selecting patients for possible benefit from immunotherapies, and soon-to-be-published College of American Pathologists /ASCO guidelines indicating MMR IHC as the recommended laboratory method for assessing dMMR status for immunotherapy selection. Our findings in these cases highlight the advantages of a comprehensive and integrative approach to determining MMR status, one that integrates expert interpretation and evaluation of MSI, MMR IHC, and TMB status in the context of germline and somatic assessment of MMR gene sequences.
Strengthening Health Systems to Support Children with Neurodevelopmental Disabilities in Fiji—A Commentary
3974c02e-5620-4c0d-b71c-ec1aecf01603
7037281
Pediatrics[mh]
Worldwide, over the last 15 years, there have been marked improvements in the number of children surviving infancy and early childhood in low- and middle-income countries (LMICs), due to social and economic changes and advances in the provision of universal health care . Recognizing this the Global Strategy for Women’s Children’s and Adolescents Health (2016–2030), the pre-eminent global health strategy, has moved beyond survival to broader goals of ensuring that every child is enabled to survive and thrive, reaching their developmental potential. Alongside this, the Sustainable Development Goals (SDGs) launched in 2015 not only have a focus on improved health but health equity and optimal early childhood development, with SDG indicator 4.2.1 in particular relating to the percentage of children under 5 who are developmentally on track in health, learning and psychosocial wellbeing . Key to this global strategy is supporting children with neurodevelopmental disabilities (NDDs). NDDs include, among others language delay, intellectual disability, autism and cerebral palsy (CP) . It is estimated that there are 53 million children with NDDs worldwide with 95% living in LMICs . However, this is likely to be an underestimate due to poor availability of disability data, lack of inclusion of many children with NDDs in society and the stigma children and their families experience . Through their life course, people with NDDs are more likely to have higher levels of morbidity and mortality, complete lower levels of education, be unemployed, and be socially isolated . They are also at increased risk of trauma, abuse and neglect. These adverse outcomes for children with NDDs increase the likelihood of a life lived in poverty, a loss of country productivity and increased health and welfare expenditure costs . If children are to move from surviving to thriving then health systems need to be able to support early identification of NDDs, comprehensive diagnostic assessment and access to high-quality early intervention, preferably in the preschool years, in order promote optimal physical health, socio-emotional and learning outcomes . The importance of early identification and early intervention, targeted according to need, has been highlighted by many global bodies, including through countries adopting the Nurturing Care Framework, launched at the World Health Assembly in 2018 . The World Bank has also published seminal policy guidelines on the importance of a diagnostic work up of children with NDDs in order to inform intervention plans made with children and their families . Translating these global policies and frameworks into action at national and subnational level through health services for children with NDDs is a challenge in Fiji as in other LMICs. Although there is a clear evidence base for supporting LMIC health systems through policy and workforce development to improve neonatal outcomes, and communicable diseases, the evidence for health systems supporting children with NDDs is in its infancy . A particular challenge relates to available human resources within health . The recent Global Survey of Inclusive Early Childhood Development (IECD) and Early Childhood Intervention (ECI) Programs ( https://www.ecdan.org/assets/global-survey-of-iecd-and-eci-programs---2019.pdf ) found that a lack of properly trained and qualified personnel and lack of mentoring, coaching and reflective supervision were key barriers to program quality and translation for children with NDDs . 
In LMICs, clinicians work in a context where there are stretched resources and competing priorities . For example, paediatricians in the Pacific may be providing services for children with NDDs but also dealing with a Dengue epidemic. In such a situation the acutely unwell child is prioritised over the diagnostic assessment of a child with NDDs. There may also be a lack of awareness of the evidence on the importance of early identification, diagnostic assessment and early intervention for NDDs . In addition, much of the training and assessment approach for NDDs in high-income countries is neither feasible due to limited time or resources, nor in some cases culturally appropriate. Program and external expertise; pre- and in-service training; and networking and collaboration are key to ensure successful IECD and ECI programs . In 2007, the World Health Organization (WHO) identified six “building blocks” of an effective health system—service delivery, health workforce, information systems, access to medicines and technologies, financing, and leadership and governance . The service delivery building block encompasses quality of care and service models that are responsive to need and enable equitable access, while the health workforce building block refers to actions for the planning, training, distribution and monitoring of health workers to enable effective service delivery. Despite a clear policy direction promoting the development of leadership and health systems for children with NDDs , systematic and evidence-based reviews indicate that there is very little in the literature regarding the development of models of care for children with NDDs, particularly in the Pacific Region . In this paper, we present a commentary to describe a ten-year collaborative effort between the Fijian Paediatric Clinical Services Network and developmental paediatric colleagues in Australia to develop a model of care for children with NDDs in Fiji. Fiji is a middle-income Pacific Island country with a population of 838,698 people—of which, approximately one-third are 15 years of age or younger . Fiji consists of 322 geographically spread islands , although approximately 75% of the population now live on the main Island of Viti Levu, with the population particularly concentrated in and around the capital Suva . An estimated one-third of the 300 islands in Fiji are inhabited with many remote rural islands and the interior of the larger islands is difficult to access by road or boat. The three predominant languages are English, the official national language, Fijian and Fijian-Hindi. Dominant religions in Fiji are Christianity, Hinduism and Islam . As with many middle-income countries, child health in Fiji is in an epidemiological transition, with an increasing policy and programming emphasis on children with NDDs and inclusive education . Over the past several decades there have been important reductions in the infant and under 5 mortality and the burden of vaccine preventable disease has decreased, although this remains an important cause of childhood mortality and morbidity . The current neonatal mortality rate is 9–10/1000 births . Supporting children with NDDs is an increasing priority for the Fiji Ministries of Health and Medical Services (MoHMs) and the Ministry of Education (MoE). This was exemplified in 2017 by endorsement of the Pasifika Call to Action for Early Child Development by the Government of Fiji in 2017 . 
On the 15th of June 2017, the United Nations welcomed Fiji’s ratification of the Convention of the Rights of Persons with Disabilities . Colonial War Memorial Hospital (CWMH) is the tertiary paediatric referral hospital for the Fiji Islands and also acts as the sub regional referral paediatric hospital for the South Pacific. It has a paediatric emergency department, a paediatric intensive care unit with 500 admissions/year and a neonatal intensive care unit with 500 admissions/year. There are 9000 births/year. CWMH has 95 inpatient beds but 120 inpatients at times, such as when there are Dengue, Leptospirosis or Typhoid outbreaks after the wet season. CWMH has an outpatient department which sees an estimated 10 000 children a year. CWMH staff also support and link closely with paediatric teams in the two sub-divisional hospitals of Lautoka and Labasa and the team conduct outreach clinics to the Fiji Islands. The paediatric team at CWMH has a total of 20 paediatric medical staff of varying levels of training experience. CMWH is closely linked to Fiji National University which, in 2019, will have 90 medical graduates. There are 10 visiting specialists who come to Fiji to work with local staff including, as we will describe in this paper, developmental paediatrics. Physiotherapy and dietetic services for children exist at CWMH but are more limited or lacking at other sub-divisional hospitals. Paediatric trainees gain their qualification in paediatrics in Fiji through an initial one-year Diploma of Child Health Program and then a three years Masters in Paediatrics program run by the Fiji National University (FNU). Paediatric trainees in Fiji will often do between one and two years of specialist training in Australia, New Zealand and India in subspecialties. Paediatric staff come from many countries throughout Asia and the Pacific (i.e., Fiji, India, Pakistan, the Philippines, Timor Leste, Tuvalu, Vanuatu, Kiribati, Cook Islands, Nauru, Solomon Islands, Tonga, and Samoa). This means that collaborative partnerships in Fiji have regional implications. In summary, there are high levels of coverage of acute health care services in Fiji. It is in this context that the partnership with two developmental paediatricians from Australia (SW, KM) began. In 2010, when the partnership began, it was estimated that approximately one child a day was presenting to the outpatient department with suspected NDDs. There was a lack of training and confidence amongst local paediatric staff, a lack of assessment guidelines and a lack of clear referral pathways to early intervention for children with NDDs. The model of care developed in this collaboration follows the evidence-based recommended tiered approach to supporting children with NDDs . The developmental paediatric services sits within a broader universal framework of early identification. Our collaboration has supported the establishment of “red flags” for NDDs up to the age of two years on the Fijian Maternal Child Health Card, which is used as part of the universal developmental surveillance system by community nurses. These questions include milestones on mobility, language and self-care. Children identified by this and other services as at risk of NDDs in the Suva-Nausori catchment then come to CWMH and have a clinical review by local paediatric trainees in the outpatient department and at this point may also receive referrals for investigation and early intervention. 
Other pathways to CWMH include ex-neonatal intensive care/paediatric intensive care patients; children who were self-referred by their parents; children referred by schools; children referred by the local early intervention service. Children with suspected NDDs undergo a diagnostic assessment by our future leaders in developmental paediatrics (local paediatric trainees K.N. and K.T.) with developmental paediatric consultant support (S.W., K.M.) as needed. Children receive a diagnosis and are referred for investigation (including hearing tests, vision tests for all and limited blood tests, neuroimaging as indicated) and early intervention or if school aged, educational support. The processes in the model have evolved organically since 2010. In 2010, S.W. made her first visit and subsequently has been a visiting developmental paediatrician twice a year staying for 3 to 4 days seeing between 13 and 15 children always with a local Fijian paediatric trainee. The clinic has had support of visiting specialists including speech pathologists, occupational therapists, a paediatric neurologist and a paediatric geneticist. To ensure continuity of care and sustainability of the service, each child has an allocated Fijian paediatric consultant. During the clinics, the local paediatric trainees are taught how to undertake a developmental history and examination using the “see one, do one, teach one method” and a standardised history and examination template from SW’s workplace in Sydney. SW initially used the Griffiths Mental Developmental Scale (GMDS) non-verbal scale as a diagnostic assessment tool. However, in the interests of time and sustainability, this was replaced with a tool that could be used quickly by local paediatric trainees, the Ages and Stages Questionnaire (ASQ) . The ASQ was chosen as it is the developmental screening tool now taught in the undergraduate medical curriculum in Fiji and is available in the outpatient department. The Childhood Autism Rating Scale and Diagnostic and Statistical Manual of Mental Disorders (DSM) V are used to support diagnostic work up of children with suspected Autism Spectrum Disorder. To date, 15 paediatric trainees have been trained using this method. They now assess the children independently and present them to the visiting developmental paediatrician as long cases which then meet their training requirements for the Diploma of Child Health and Masters of Paediatrics. Now all of these paediatric trainees, some who are now consultants, are expected to be able to competently assess and manage a child with suspected NDDs. Since 2015, two trainees, K.N. and K.T., have become the nominated leads in developmental paediatrics at CWMH and now have a regular weekly clinic where they see children with suspected NDDs referred from other trainees, community health workers, schools and the local early intervention service. K.N. and K.T. support the development of local trainees and therefore contribute to the development of a sustainable workforce for developmental paediatrics in Fiji and the region. The process is outlined in . Since 2012, a total of 370 children with NDDs have been assessed at CWMH with a marked increase since the local paediatric trainee clinic began (data missing 2010/2011) as outlined in . 
Children seen have had a wide range of conditions including: language delay, sensorineural deafness, blindness, genetic syndromes, CP, global developmental delay, learning difficulties, intellectual disability, congenital hypothyroidism, and, increasingly, autism spectrum disorder. The team is arranging ethics clearance to undertake a detailed audit of the clinic and describe the patients seen in more detail. As an example of the type of clinical scenarios addressed, a deidentified case study of a child seen in our model of care is outlined in the following paragraphs. X, a 4-year-old male, was referred to our local paediatric developmental clinic at CWMH via the Children’s Outpatient Department following parental concerns of unusual behavior and delayed speech with poor social-personal skills. In this clinic, he was seen by a local senior paediatric trainee, and a diagnosis of Autism Spectrum Disorder with severe language delay was made. The diagnosis was made possible through systematic and comprehensive history taking and physical examination, with the assistance of existing assessment tools, for example the Childhood Autism Rating Scale (CARS) form and the Ages and Stages Questionnaires (ASQ). X subsequently underwent blood investigations (full blood count, thyroid function tests), audiometry screening and vision testing, with a referral to his local special school for early intervention. Family counselling was undertaken with regard to the diagnosis, prognosis and importance of supportive care. X was also seen with the visiting developmental paediatrician from Australia to reaffirm the diagnosis, as well as to provide X’s parents an opportunity to clarify any doubts about Autism Spectrum Disorder. 3.1. Early Intervention in Fiji A key opportunity and challenge is the current early intervention offered in Fiji. The two key referral services for the developmental paediatric team in Fiji are the physiotherapy department at CWMH and the Frank Hilton Organisation (FHO). The physiotherapy service provides therapy to children with NDDs and a playgroup for children with CP; however, staff training in paediatrics generally, and specifically in the needs of children with NDDs, varies substantially. Additionally, aids and assistive devices for children with NDDs are limited, as are orthopaedic services. Currently, there are no paediatric speech pathologists, psychologists or occupational therapists employed by the MoHMs, nor training courses for these professions in the three universities in Fiji. The Frank Hilton Organisation (FHO) is a non-government organisation increasingly supported by the MoE and has been operating to provide services for children with NDDs in Fiji for over 50 years . The FHO includes an Early Intervention Centre (EIC) in Suva, in close proximity to CWMH. All children attending the FHO have individual family service plans, and over the past year or so the EIC has begun to run play-based therapy groups for young children with NDDs. Leadership development for long-term local staff at the EIC is an identified priority for the FHO. 3.2. Embedding Research in Service Development A key component of our work in Fiji is to build a local evidence base and research partnerships in NDDs. Collaboration between Fiji MoHMs and medical services staff and our group of Australian paediatricians, led by KM, recently provided the first long-term neurodevelopmental outcome data for the Neonatal Intensive Care Unit (NICU) in Fiji . 
This research examined the outcomes of children discharged from the CWMH NICU and a matched cohort of non-NICU babies to determine the prevalence of moderate to severe NDDs. Being a high-risk neonate, gestational age, birth weight, asphyxia, meningitis and/or respiratory distress were significantly associated with risk of NDDs . Prevalence of NDDs was high among this predominantly term high-risk neonatal cohort compared with controls . These results have informed efforts to strengthen the quality of care and our model, and have now led to a pilot trial to improve the health and wellbeing of children with CP and their families. The “Toso Vata” (Moving Together) pilot parent support and early intervention program is a local adaptation of “Getting to Know CP” , an evidence-based facilitated participatory learning support program for parents. In March 2019, the team met with a group of multisectoral stakeholders to plan its trial, which will be completed in December 2019. In 2020, K.N., one of our local leaders, is planning to conduct, with the team’s support, a mixed methods evaluation of the model of care for children with NDDs. This will include qualitative interviews of caregivers about their experiences attending the clinics and their recommendations for future redesign. We are also planning to interview the paediatric trainees regarding their training experience in this health system. The quantitative component will include a detailed audit of the demographic, clinical and service use characteristics of the children with NDDs who have attended the developmental paediatric clinics over the last 10 years. We will use the evidence from this evaluation to develop approaches to address identified barriers to health care access and to support further training and ongoing professional development of local allied health staff and ancillary services (e.g., mental health, orthopaedics). This evaluation will also be an important, Fijian-led and much needed contribution to the evidence base for models of care for children with NDDs in LMICs. 
A recent systematic review highlighted that the way forward to provide an integrated evidence-based model of care for children with NDDs in LMICs requires early identification linked to early intervention, a whole-family approach and “collaborative child health and development partnerships” . This article has outlined our approach to laying the groundwork for this in Fiji. To further strengthen this model of care, we need to consider the feasibility of our approach in terms of human resource capacity, diagnostic tools that are appropriate to the LMIC context, training needs and the reach of early identification and early intervention. In terms of human resources, adequate time to assess and manage children with NDDs (30 to 45 min) versus daily and urgent clinical responsibilities is an ongoing tension. There is also a lack of capacity for a multidisciplinary clinic. This is exacerbated by the fact that the countries with higher rates of children with NDDs (LMICs) have the lowest number of paediatricians and allied health staff required to assess and treat children with NDDs . Finding feasible and accurate screening and diagnostic assessment tools in the LMIC context is a challenge, as recently outlined in two systematic reviews . 
Our clinics originally used the GMDS to assess a child’s developmental age, which was done by the visiting developmental paediatrician. Due to time and training constraints it was decided by the team that instead of a diagnostic assessment tool such as the GMDS, the locally available developmental screening tool, the ASQ , would be used. This is done in conjunction with the Childhood Autism Rating Scale and DSM V , as is clinically appropriate. These tools have been validated in LMICs but not for the Pacific region and the ASQ is a screening tool only thus limiting the assessment . There is also a need to strengthen pathways for early identification and intervention with regards to management of NDDs . This entails enhancing community nurse training in child development monitoring with data to monitor referral patterns and gaps; ongoing integration between health system levels and across sectors including education and social services; increased incorporation of data related to child development into the health information system . We need to support approaches to early intervention that are feasible, sustainable and effective in context, have an ongoing supply of assistive devices and provide broader support for parents . There are challenges with accessing early intervention outside of Suva and variation in the support and care in mainstream and special schools. Both the FHO and the MoHMs are strengthening linkages and referral pathways between health and education. CWMH through KT KN and IT are working closely with referrers, physiotherapy and the FHO to develop referral pathways, training and case conferencing. As with all child health services, there are also challenges in reaching and engaging those children who are likely to most need the services who live in squatter settlements, regional areas and remote villages. We have started to address this by extending the training out to the two divisional hospitals in Labasa and Lautoka. For children with NDDs and their families in Fiji, there has been access to high-quality developmental paediatric services. For the Fijian team members, there has been on-the-ground clinical teaching, mentoring and career development support and a meaningful collaboration in further service, policy and research. There has been increasing regional capacity by sensitizing and training regional trainees which has been shown in other middle-income countries to increase knowledge, perceived competence and skills related to assessing children with NDDs . For the Australian team members, there has been significant experience within resource-limited settings, promoting clinical skills, innovation, resourcefulness and efficiencies working with local and regional team to provide services for children with NDDs. This reciprocal benefit from global ways of working has been consistently shown to be a benefit in the literature . If we are to truly meet the global challenge of moving children from surviving to thriving, we must strengthen health systems for children with NDDs by collaboratively developing service delivery models and increasing workforce capacity. Key to this collaboration are developmental paediatric services that are appropriate to context, based on shared knowledge and reciprocal learning between experts and local staff, that are high quality and sustainable. 
We have shared our experience from Fiji of trust, mutual respect and in-country training, making our model resilient, adaptable and sustainable to meet the needs of children with NDDs and their families.
New views on three-dimensional imaging technologies for glaucoma: an overview
17ae3ad3-bfa6-4422-9336-89e7e58abe61
8826607
Ophthalmology[mh]
Glaucoma is a chronic optic neuropathy, for which there is no cure. Early detection is the key to preserving vision as appropriate treatment can delay progression and prevent irreversible blindness. Historically, the diagnosis of perimetric glaucoma was made when a person had already lost up to 40% of their retinal ganglion cells (RGC). However, there is now a shift towards detecting glaucomatous structural changes before visual field loss as results of the Ocular Hypertension Treatment Study and advances in imaging have shown that changes in structural testing [i.e. disc photography and optical coherence tomography (OCT)] can occur before visual field changes. With OCT technology, these changes include retinal nerve fiber layer (RNFL) thinning, neuroretinal rim tissue thinning, and macular ganglion cell layer thinning. Imaging now has the capability of providing doctors with more data and fewer artifacts, or fewer clinically unusable scans. Additionally, OCT provides data on the surrounding peripapillary vasculature and can also visualize the lamina cribrosa, giving insights into the vascular and mechanical theories of glaucoma. This review will summarize best clinical three-dimensional (3D) imaging technologies and associated 3D volumetric analyses, which will enable us to diagnose and detect 3D glaucomatous structural changes sooner, ultimately enabling earlier treatment and vision preservation. OCT is the most commonly used, noninvasive, in-vivo 3D imaging technology currently used in clinics. It has become a foundational part of the evaluation of glaucoma and retinal diseases, and it can help to objectively quantify disease progression. Images in OCT can be translated into one-dimensional (1D) units of data or length or A-scans. Many A-scans combined together create B-scans, which are two-dimensional (2D) cross-sectional images. Volumetric or 3D images are created when B-scans are combined. During the transition of 2D to 3D imaging of the eye, OCT became an indispensable part of clinical practice. During the past decades, OCT technology has been continuously evolving. First described in 1991, time domain OCT laid the groundwork for in-vivo 2D imaging of the eye . In 2003, 3D volumetric imaging of the eye with video-rate spectral domain OCT (SD-OCT) became possible . Faster-scan speeds allowed for unprecedented video imaging of the posterior pole. Fourier domain OCT and SD-OCT are different terms for the same technology, whose advancement allowed for better sensitivity and higher resolutions . Video-rate SD-OCT propelled the use of OCT machines in almost every ophthalmology office, utilizing a spectrometer and light source wavelengths ranging from 820 to 870 nm for the RTVue and iVue (Optovue Inc., Fremont, California, USA), the Cirrus (Carl Zeiss Meditec, Inc., Dublin, California, USA), and the Spectralis (Heidelberg Engineering, GmbH, Heidelberg, Germany) . Around the same time, another technology advancement called swept source OCT (SS-OCT) was developed . Optical frequency domain imaging, or SS-OCT, was not Food and Drug Administration (FDA) approved for clinical use until 2016 and is not as commonly used in ophthalmology clinics as SD-OCT. However, compared with SD-OCT, SS-OCT has faster-scan speeds and greater depth penetration, allowing for scanning of larger areas and deeper layers. SS-OCT uses a laser light with a center wavelength of approximately 1050 nm [ – , ▪▪ , , ]. 
Variations of SD-OCT technology include enhanced-depth imaging (EDI), which simply focuses imaging to more posterior structures. Other enhancements may include adaptative optics, which reduces the aberrations caused by the lens in the eye, providing better resolution with reduction of artifacts . Another added feature for SD-OCT and SS-OCT technologies is OCT angiography (OCTA), which visualizes blood vessels [ ▪▪ ]. Although SD-OCT has the ability to measure blood flow , most commercially available OCTA software does not quantify blood flow but simply visualizes the presence or absence of blood vessels. With current software packages, the 3D vascular network in the eye can be visualized without fluorescein angiography or dye, and metrics, such as vessel density can be quantified. With the advent of OCT, the use of invasive fluorescein angiography has been vastly reduced. With current OCTA software, serial images in time are compared, and image differences are attributed to the presence of moving blood cells, which indicate the presence of blood vessels. Therefore, the structural presence of blood vessels can be mapped for different regions, such as the retinal surface, the radial peripapillary capillary network, and the intermediate and deep capillary plexus . This review will focus on structural imaging and not vascular imaging as normative databases are only available for structural and not OCTA parameters. In addition, structural measurements are currently more reliable than OCTA data. Although OCT technology now allows for 3D volumetric imaging of the eye, its full potential is limited by commercially available software, which does not fully analyze the volumetric data obtained by the machines. Therefore, software advances with 3D data analysis are needed to maximize the potential of SD-OCT and SS-OCT technology. Currently, the most commonly used glaucoma parameter is peripapillary RNFL thickness . However, we need to move beyond the RNFL. Future regions of interest that need to be targeted for better glaucoma diagnosis and monitoring of disease progression are not only just the RNFL but also the optic nerve, the peripapillary region, and the macula. The peripapillary RNFL thickness parameter is the most commonly used, commercially available glaucoma parameter in clinical practice , and these 2D RNFL thickness measurements are the result of a single B-scan. Thinning of the RNFL is associated with glaucoma. However, the greatest limitation of the RNFL thickness measurement is that its OCT measurements may be inaccurate because of artifacts in 19.9–46.3% of scans [ , , ▪▪ , , ]. In an analysis of 2313 scans from 1188 patients, Liu et al. defined 12 types of artifacts, which can be caused by poor data acquisition, inaccurate software analysis, or patient ocular disorder . Moreover, the RNFL thickness parameter has high false-positive rates of 26.2–39% (i.e. ‘red disease’ when the OCT machine suggests incorrectly that the patient has glaucoma), in patients with longer axial lengths and smaller disc areas . Additionally, RNFL measurement errors and artifacts are more commonly seen in glaucoma patients , as glaucoma causes not only RNFL thinning but also RNFL reflectivity loss. As the less reflective glaucomatous RNFL is harder to distinguish from the underlying plexiform layer, segmentation of the posterior RNFL border may be more difficult and inaccurate. 
Because of the high-artifact rates seen with peripapillary RNFL thickness measurements, research to develop better ways to more accurately quantify the RNFL in glaucoma is needed. Peripapillary RNFL volume is perhaps a better future metric for quantifying RNFL tissue in glaucoma than the aforementioned RNFL thickness parameter. In a study by Khoueir et al. , high-density optic nerve volumetric research scans were used to evaluate the diagnostic capability of SD-OCT peripapillary RNFL volume measurements in a multiethnic group of 113 open-angle glaucoma (OAG) patients and 67 normal participants. Employing a custom-designed MATLAB code, RNFL volume for different-sized annuli were calculated from high-density volume scans centered on the optic nerve head (ONH). This study concluded that overall RNFL volume measurements were lower in OAG patients compared with normal patients. Using different annuli sizes of 1 mm width, the best diagnostic capability was found for the circumpapillary annulus (i.e. RNFL-volume 2.5–3.5), which had an inner diameter of 2.5 mm and an outer diameter of 3.5 mm [area under the receiver operating characteristic curve (AUROC) = 0.955]. Of the eight possible RNFL volume sectors (i.e. for 4 quadrants and 4 octant values), the inferior quadrant had the best diagnostic ability (AUROC = 0.977). These results were then compared with the best diagnostic values for 2D RNFL thickness, which were global (AUROC = 0.959) and inferior (AUROC 0.966). Although RNFL volume parameters had similar ability as RNFL thickness to help diagnose glaucoma , a complementary RNFL volume paper shows that this same high-density 3D RNFL volume parameter can reduce the percentage of clinically unusable scans in glaucoma patients to 7.5%, lower than the 58.5% of unusable scans for the 2D RNFL thickness scan [ ▪▪ ]. Therefore, this 3D high-density volumetric scan protocol, which takes only a few extra seconds of scan time , can achieve improved data accuracy [ ▪▪ ] by reducing the number of unusable scans, which would otherwise require repeat scanning. In summary, RNFL volume measurements have the same diagnostic ability as RNFL thickness measurements but provide more data and fewer unusable scans, which would improve future glaucoma care. The key feature of old neuroretinal rim measurements is that rim tissue is only quantified if it lies along a 2D flat reference plane, which varies from 120 to 200 μm above and parallel to the retinal pigment epithelium (RPE). According to a review , best commercially available rim parameters are global and inferior rim area along this 2D flat reference plane. In contrast, the key feature of new neuroretinal rim measurements is that rim thickness is calculated in 3D space and does not use a 2D mathematically flat reference plane . The earliest commercially available reference plane-independent parameter is the low-density Bruch's membrane opening-minimum rim width (BMO-MRW) parameter, which quantifies 360° of rim tissue in 3D space using the cup surface as the inner rim border and the BMO disc border as the outer rim border. Articles suggest that the BMO-MRW is the same as or better than both RNFL thickness and reference plane-dependent rim area for diagnosing glaucoma and monitoring disease progression [ , , ▪▪ ]. Similar to the low-density commercially available BMO-MRW, good diagnostic ability can be achieved with a high-density research scan rim measurement, the minimum distance band (MDB). 
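Conceptually, the MDB and BMO-MRW are shortest-distance measurements made in 3D space rather than along a flat reference plane. Purely for geometric intuition, the sketch below computes, for each sampled disc-border point, the minimum Euclidean distance to a sampled inner (cup) surface; the coordinates and sampling are hypothetical, and this is not the published MDB or BMO-MRW implementation.

```python
import numpy as np

def minimum_rim_distance(border_point: np.ndarray, inner_surface: np.ndarray) -> float:
    """Toy reference-plane-independent rim measure: the shortest 3D Euclidean
    distance from one disc-border point (e.g. an RPE/BM termination or BMO
    point) to a cloud of points sampled on the inner (cup/ILM) surface.
    All coordinates are in millimetres."""
    return float(np.min(np.linalg.norm(inner_surface - border_point, axis=1)))

def rim_profile(border_ring: np.ndarray, inner_surface: np.ndarray) -> np.ndarray:
    """Minimum-distance rim value for each sampled meridian around the disc."""
    return np.array([minimum_rim_distance(p, inner_surface) for p in border_ring])

# Entirely hypothetical geometry: 360 disc-border points on a 0.75 mm ring at
# depth z = 0, and an inner surface sampled roughly 0.25-0.4 mm above it.
theta = np.deg2rad(np.arange(360))
ring = np.column_stack([0.75 * np.cos(theta), 0.75 * np.sin(theta), np.zeros(theta.size)])
rng = np.random.default_rng(2)
xy = rng.uniform(-1.5, 1.5, size=(5000, 2))
surface = np.column_stack([xy, 0.25 + 0.1 * np.hypot(xy[:, 0], xy[:, 1])])
profile = rim_profile(ring, surface)
print(round(profile.mean(), 3), round(profile.min(), 3))  # global and thinnest values (mm)
```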
Both the high-density MDB and the low-density BMO-MRW parameters quantify neuroretinal rim tissue without a mathematically flat reference plane, and Fig. compares how these two parameters are calculated. The scan protocols are different as the MDB uses a high-density 193-line raster scan and the BMO-MRW uses a low-density 24-line radial scan. Although both define the inner rim border as the cup surface, the outer rim border is different. For the MDB, the disc border is the retinal pigment epithelium/Bruchs membrane (RPE/BM) complex. For the BMO-MRW, the disc border is the BMO. Studies have shown that the MDB rim thickness in 3D space has the same or better diagnostic ability compared with both RNFL thickness and 2D neuroretinal rim parameters . Shieh et al. demonstrated that glaucoma patients had significantly thinner MDB values in all quadrants and sectors, compared with control participants ( P < 0.0001). The best AUROC values for glaucoma and early glaucoma patients were the overall global (AUROC 0.969; 0.952), the inferior quadrant (AUROC 0.966; 0.949), and the inferotemporal sector (AUROC 0.966; 0.944) . An advantage of high-density MDB rim measurements in 3D space is that more data is obtained, resulting in fewer unusable scans. Park et al. [ ▪▪ ] showed that the 3D MDB thickness parameter has only 15.8% unusable scans, compared with 2D RNFL thickness with 61.7% unusable scans. Moreover, a longitudinal study demonstrated that MDB rim thickness can detect glaucomatous structural damage 1–2 years earlier than current clinical tests (i.e. RNFL thickness and disc photos) [ ▪▪ ]. This was demonstrated in a cohort of 124 OAG patients followed for an average of 5 years [ ▪▪ ]. An example from this article [ ▪▪ ] shows an OAG patient who had progressive MDB thinning over a 7-year period (Fig. ). For an irreversible blinding disease like glaucoma, treatment initiated 1–2 years earlier has invaluable impact and public health significance. One of the main problems with RNFL measurements is that segmentation of the posterior RNFL border is more inaccurate in glaucoma patients, whose RNFL is less reflective. A possible solution is peripapillary retinal thickness maps, whose posterior border (i.e. the RPE) is easier to accurately segment. This concept was published by Yi et al. , who showed examples where peripapillary retinal thickness maps revealed arcuate defects not seen on RNFL thickness maps (Fig. ) . Peripapillary retinal thickness measurements are different than macular retinal thickness maps, which are available in commercially available software. With this concept of peripapillary retinal thickness measurements, Simavli et al. used the commercially available ETDRS (Early Treatment Diabetic Retinopathy Study) scan circles and centered it over the optic nerve instead of the fovea. For all tested circumpapillary annuli sizes, the peripapillary retina was thinner in OAG patients for all quadrants, compared with normal participants, and best diagnostic regions were located inferiorly . Furthermore, two other studies showed that peripapillary retinal volume measurements [ ▪▪ , ] have excellent performance as a diagnostic tool to detect glaucoma as retinal volume is lower in OAG patients compared with normal patients. The first study used ETDRS scan circles in commercially available software packages and showed that best retinal volume parameters had similar diagnostic ability compared with best RNFL thickness parameters . 
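Both the RNFL-volume and peripapillary retinal-volume parameters discussed above amount to integrating a thickness map over an annulus centred on the optic nerve head. Before turning to the second study, the sketch below illustrates that integration with a hypothetical grid size, pixel spacing, and uniform thickness; it is a simplified stand-in for, not a reproduction of, the research software used in these papers.

```python
import numpy as np

def annular_volume(thickness_um: np.ndarray, pixel_mm: float,
                   inner_d_mm: float, outer_d_mm: float) -> float:
    """Integrate a peripapillary thickness map over an annulus centred on the
    map centre and return the tissue volume in mm^3.

    thickness_um : 2D thickness map in micrometres (RNFL, retina, GCC, ...)
    pixel_mm     : size of one pixel in millimetres (assumed square/isotropic)
    inner_d_mm / outer_d_mm : inner and outer annulus diameters in millimetres
    """
    h, w = thickness_um.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    r_mm = np.hypot(yy - cy, xx - cx) * pixel_mm
    mask = (r_mm >= inner_d_mm / 2.0) & (r_mm <= outer_d_mm / 2.0)
    # volume = sum of thickness (converted to mm) times the pixel area in mm^2
    return float(np.sum(thickness_um[mask]) * 1e-3 * pixel_mm ** 2)

# Hypothetical check: a 6 x 6 mm map on a 200 x 200 grid with uniform 100 um
# thickness; the 2.5-3.5 mm annulus should give roughly
# pi/4 * (3.5**2 - 2.5**2) * 0.1 ~= 0.47 mm^3.
thickness = np.full((200, 200), 100.0)
vol = annular_volume(thickness, pixel_mm=6.0 / 200, inner_d_mm=2.5, outer_d_mm=3.5)
print(round(vol, 3))
```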
The second study used research software, specifically designed for glaucoma diagnosis [ ▪▪ ], to calculate 3D peripapillary retinal volume. In this article [ ▪▪ ], customizable scan-circle sizes were used to create different-annuli sizes to evaluate eight possible peripapillary total retinal parameters. The best parameter found was the retinal volume – 2.5–3.5, which is the volumetric annular region centered on the optic nerve, whose inner border is 2.5 mm and outer border is 3.5 mm. A recurring theme when comparing 3D measurements with 2D RNFL thickness scans is that 3D retinal volume measurements had lower artifact rates than 2D RNFL thickness scans (6 vs. 32.2%, P < 0.0001) [ ▪▪ ]. The ganglion cell complex (GCC) constitutes the three innermost retinal layers: the RNFL, the ganglion cell layer (GCL), and the inner plexiform layer (IPL). Although commercial software calculates GCC thickness, future parameters may include GCC volume . Verticchio et al. implemented six different-sized annuli (Fig. ) and evaluated 12 possible macular parameters. This study demonstrated that the best 3D macular parameter (GCC-volume-34, which is the volumetric annular region centered on the macula, whose inner border is 3 mm and outer border is 4 mm) had the same or better diagnostic capability as the 2D RNFL thickness parameter . There are two main theories of glaucoma pathophysiology: the mechanical theory and the vascular theory . The mechanical theory focuses on changes in the lamina cribrosa, which can be visualized with SD-OCT and SS-OCT. In an SS-OCT review, Takusagawa et al. concluded that imaging of the anterior laminar structures is more reliable than imaging of the posterior lamina, whose border is not consistently seen. With progressive glaucoma, there is posterior migration of the anterior laminar insertions, with increased thinning and posterior curvature of the lamina. With glaucoma, focal laminar defects are more common, lamina microarchitecture re-models, and laminar pore size is more variable [ ▪▪ ]. The vascular theory focuses on blood flow abnormalities and vasospasm as a cause of optic nerve damage. A review of OCTA for glaucoma reported capillary-vessel density decreases within the peripapillary nerve fiber layer and the macula in patients with glaucoma [ ▪▪ ]. They also concluded there is a moderate-to-strong association between peripapillary OCTA vessel density and visual field defects and described similar discriminatory ability between peripapillary OCTA and OCT RNFL thickness. Studies reported areas without any visible vascular network in the choroidal or deep layer microvasculature in at least half of glaucoma patients. Lower peripapillary and macular vessel-density and choroidal microvasculature dropout are associated with faster rates of disease progression [ ▪▪ ]. Newer 3D glaucoma parameters, which quantify volumetric data from high-density OCT volume scans may have equal or better diagnostic capability for detecting glaucomatous structural damage compared with the most commonly used, commercially available RNFL thickness parameter. The future of glaucoma OCT imaging should fully utilize SD-OCT's and SS-OCT's imaging abilities by presenting the physician with more data (i.e. 3D data instead of 2D data) and fewer imaging artifacts (i.e. fewer unusable scans). More data with fewer artifacts would not only improve clinical glaucoma care but also help us to better understand the mechanical and vascular theories of glaucoma. None. 
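The diagnostic-capability comparisons cited throughout this review are reported as areas under the receiver operating characteristic curve (AUROC). As a generic, self-contained illustration of how such a comparison is computed, the sketch below scores two hypothetical structural parameters on simulated normal and glaucoma groups; all values are synthetic and do not correspond to any study discussed here.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Synthetic illustration only: the group sizes, means, and resulting AUROCs
# are made up and do not correspond to any cohort discussed in this review.
rng = np.random.default_rng(4)
n_normal, n_glaucoma = 80, 120
labels = np.r_[np.zeros(n_normal), np.ones(n_glaucoma)]   # 1 = glaucoma

# Parameter A separates the groups strongly; parameter B only weakly.
param_a = np.r_[rng.normal(1.00, 0.10, n_normal), rng.normal(0.75, 0.10, n_glaucoma)]
param_b = np.r_[rng.normal(1.00, 0.10, n_normal), rng.normal(0.90, 0.10, n_glaucoma)]

# Lower values indicate tissue loss (disease) here, so negate the parameter:
# roc_auc_score expects higher scores for the positive (glaucoma) class.
auc_a = roc_auc_score(labels, -param_a)
auc_b = roc_auc_score(labels, -param_b)
print(f"AUROC parameter A: {auc_a:.3f} | AUROC parameter B: {auc_b:.3f}")
```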
Financial support and sponsorship Research supported in part by Fidelity Charitable Fund. Conflicts of interest There are no conflicts of interest.
Health literacy and medication adherence in adults from ethnic minority backgrounds with Type 2 Diabetes Mellitus: a systematic review
bc4f0238-c6b5-404f-9b4a-8723f18788f9
11745004
Health Literacy[mh]
The term health literacy was first introduced in 1970 and the concept has evolved and been redefined continuously since . A recent systematic review exploring the meaning of health literacy has defined it as the ‘ability of an individual to obtain and translate knowledge and information in order to maintain and improve health in a way that is appropriate to the individual and system contexts’(P.7). This ability helps people to make appropriate healthcare decisions, understand health risk behaviours, enhance health outcomes, and reduce the cost of care in ways that benefit their health . Health literacy is now recognised as a social determinant of health, which is responsive to change using interventions . Low levels of health literacy are directly related to poor health outcomes , higher use of emergency services, higher rates of hospitalisations, lower rates of utilising preventive services, increased likelihood of making medication errors , poorer understanding of medication instructions , increased cost of health care , poorer ability to self-care, and a higher risk of mortality . Health literacy skills are influenced by various demographic and social factors including education, socioeconomic status, occupation, income, social support, age, cultural background/ethnicity, language, gender, disability, and race, which act as antecedents of health literacy . Individuals with low education, low income, low socioeconomic background and belonging to ethnic minority backgrounds are at a higher risk of having low health literacy levels and often experience barriers in accessing health care . Lack of cultural competency among healthcare professionals is also a barrier for individuals from ethnic minority backgrounds to access and utilise healthcare services . It is essential to recognise that these barriers exacerbate health inequities. The link between minority status and health literacy indicates that the most disadvantaged groups often have weaker health-related skills which lead to health disparities . Addressing these disparities is crucial for achieving greater health equity and improving the health status of disadvantaged populations. Ethnic minority groups are at a higher risk of chronic conditions as migration-related stress and changes in lifestyle are critical risk factors in developing chronic conditions, such as Type 2 Diabetes Mellitus (T2DM) and hypertension, in comparison to non-ethnic minority populations . T2DM is a chronic condition defined as having high levels of glucose in the blood, also known as hyperglycaemia . It is a major contributor to other health-related complications such as renal disease, cardiovascular disease, stroke, visual impairment, and lower limb amputation . T2DM is the ninth leading cause of mortality worldwide, attributing to around 1 million deaths annually . In 2021, around 529 million people were affected by T2DM, and the cases are projected to increase to 7,079 per 100,000 globally by 2030 . Major risk factors for T2DM include obesity, physical inactivity, poor diet, ageing, cardiovascular disease, high blood pressure, impaired glucose tolerance, and gestational diabetes . Among ethnic minority populations, leading risk factors that increase the risk of developing T2DM include immigration, genetics, socioeconomic status, and socio-cultural factors . T2DM can be managed by making lifestyle changes including healthy eating habits and daily physical activity . 
Alongside lifestyle modification, oral anti-diabetic medications, and insulin play a crucial role in diabetes management, consequently, adherence to medications is important in achieving desired health outcomes . Medication adherence is defined as a process in which patients take their medications as prescribed by their healthcare providers . Suboptimal adherence may lead to treatment failure, adverse health outcomes, and undesired medical expenses . Specific to T2DM, improvement in adherence to oral anti-diabetic medications results in better glycaemic control, decreased long-term complication development, and a reduction in health care costs . It is evident that T2DM puts a considerable burden of disease management on patients . Other than cognitive factors such as health literacy, there are demographic factors such as age, gender, race, education level, and income that also have an impact on diabetes medication adherence . Moreover, in ethnic minority groups, low health literacy levels lead to an incomplete understanding of disease and treatment regimens , high chances of misinterpreting medication labels which may influence their attitudes towards medications for diabetes management , which may result in medication non-adherence . Identified factors that affect medication adherence include lack of knowledge of clinical indication; treatment duration or administration timing; lack of knowledge of the consequences of adherence or non‐adherence; and extent of knowledge on medication side‐effects . The broad range of studies conducted across different countries, ages, and with patients with different health conditions have reported that health literacy has a direct impact on medication adherence and have found a statistically significant and positive association between health literacy and medication adherence , while other studies have reported a positive association between health literacy and medication adherence, but do not support a strong association . Yet, there are studies and systematic reviews that have found that there is no direct association between health literacy and medication adherence , but found a significant moderator impact of low health literacy on medication adherence by influencing patients’ medication beliefs . Most studies have generated conflicting and inconsistent results which may be due to such associations only observed for some health conditions and not others . A review of systematic reviews examining the association between health literacy and adherence suggested that evidence on the relationship between health literacy and adherence is relatively weak. Several scoping searches of the literature were conducted to identify existing systematic reviews that have focused on the association between health literacy and medication adherence in patients with diabetes. Thirteen systematic reviews were found from the search , but a thorough assessment revealed a gap in knowledge. None of the systematic reviews focused on people from ethnic minority backgrounds with T2DM (see Appendix ). Moreover, three reviews incorporated the evidence from observational and interventional studies without distinguishing the results based on study design. Most of these reviews conducted searches covering articles published up until 2016, except for one that covered articles published up until 2020 . 
Furthermore, the methodological quality of previous systematic reviews was appraised using the AMSTAR 2 tool by two independent reviewers (AA and JP), and most reviews were rated as “critically low”. These gaps highlight the need for a high-quality systematic review to examine the association between health literacy and medication adherence focusing on ethnic minority population with T2DM. Therefore, this systematic review aimed to examine the evidence on the association between health literacy and medication adherence in people from ethnic minority backgrounds living with T2DM. This body of work may be used to inform future interventions for improving medication adherence in adults from ethnic minority backgrounds with T2DM. This review is reported using the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) 2020 guidelines (Fig. ) . The protocol of this systematic review was registered with PROSPERO International Prospective Register of Systematic Reviews (2022 PROSPERO CRD42022328346) . The protocol paper of this systematic review was published on the online digital repository platform—Figshare . Inclusion criteria The Population, Exposure, and Outcome (PEO) criteria (Appendix 2) was used to define our inclusion/exclusion criteria. Studies were included if they: Measured health literacy and medication adherence using either a subjective measurement tool or objective measurement tool, or both examined the association between health literacy and medication adherence included samples of at least 50% or more from ethnic minority populations—term used for included population in the study can be ethnicity; ethnic minority; or minority ethnic groups; or race; or specific names of cultural backgrounds such as African, Asian, and Hispanic focused on T2DM and incorporated any study design were published in the English language were available as full-text journal articles were published from inception (earliest available date) until January 2024 Exclusion criteria Studies were excluded if they: focused on Type 1 Diabetes Mellitus or Gestational Diabetes were review articles, editorials, commentaries, or conference abstracts lacked sufficient data on health literacy or medication adherence measures focused on concepts related to health literacy and medication adherence but do not directly measure and analyse the association Information sources The following databases were searched: MEDLINE (Ovid), The Cochrane Library, The Cumulative Index to Nursing and Allied Health Literature (CINAHL) (EBSCO), PsycInfo (EBSCO), and Embase (Ovid). The initial systematic search was performed on 22nd April 2022, an updated search was performed on 23rd Jan 2024. Further, reference lists of all included articles were screened, and a manual search was performed for previous systematic reviews. Search strategy The Population, Exposure of Interest and Outcome (PEO) criteria (Appendix 2) was used to devise the review question and relevant search terms. Search terms included three key terms: health literacy, medication adherence, and T2DM. Ethnic minority search terms were not included because of the broad number of terms used to define ethnic minority people globally. Therefore, people from ethnic minority backgrounds within studies was included as an inclusion criterion during the assessment of articles for full-text eligibility. 
A combination of keywords and Boolean operators, truncations, phrase searching, and subject headings were used in the search strategy in consultation with a Health Sciences librarian. The search strategy was pre-tested in the MEDLINE (Ovid) database and subsequently tailored to suit the various functions and operators associated with each database. The search strategy from MEDLINE (Ovid) is provided in Appendix 3. Further, the authenticity of the search strategy was tested by searching the inclusion of previously conducted relevant systematic reviews within the final search results. Study selection process Studies identified through the five electronic databases and manual searches were uploaded to the reference manager software Covidence, and duplicates were removed. Articles that met the inclusion criteria were retrieved as full-text and were imported into Covidence for review. Two reviewers (JP and AA) independently assessed the full-text articles for eligibility. Any disagreements were resolved through discussion including a third reviewer (FM or AE). The process of study selection was carried out in accordance with the PRISMA 2020 checklist and presented as a flow diagram (Fig. ). Data collection process and data items A standardised data extraction form was developed and pilot-tested independently by two reviewers (JP and AA). The data that was extracted included first author, publication year, title, country of study, study design, study setting, sample size, participant characteristics, inclusion criteria, data collection methodology, statistical method used for analysis, study outcomes (association between health literacy and medication adherence), confounders identified and adjusted for, and limitations. Data extraction was conducted primarily by two reviewers (JP and AA) independently. FM and AE provided feedback and resolved disagreements if any. For missing data and/or uncertainties, the study authors were contacted for further information a maximum of three times. Assessment of methodological quality Two independent reviewers (JP and AA) assessed the methodological quality of all the eligible studies before their inclusion in the systematic review. A standardised Joanna Briggs Institute (JBI) critical appraisal tool was utilised to evaluate the methodological quality in relation to bias in designing, conducting, and analysis of the study. The JBI critical appraisal tool evaluates studies based on criteria, such as having a clear inclusion criteria, detailed descriptions, validated measurements, confounding factor management, and appropriate statistical analyses, ensuring high methodological quality . No studies were excluded based on risk of bias assessments. Any disagreement between the two reviewers was resolved through discussion or by including reviewer FM and AE where required. Data synthesis The included articles were reviewed in detail and categorised into current evidence on the association between health literacy and medication adherence in adults from ethnic minority backgrounds with T2DM. The included articles were assessed independently by two appraisers (JP and AA) and the results were reported descriptively for the present systematic review.
Study selection The initial search yielded 6,318 records, which were reduced to 4,407 unique records after duplicates were removed; reference searching yielded a further 166 records. Of the 4,573 records (4,407 plus 166) screened against title and abstract, 51 studies were selected for full-text review. Upon full-text assessment of these studies, seven unique studies were deemed eligible for inclusion in this review (Fig. ) and 44 studies (41 from the database search and 3 from the citation search) were excluded because they did not meet the inclusion criteria (Appendix 7). Study characteristics All seven included studies employed a cross-sectional design. The total participant sample sizes across these studies varied from 53 to 408 participants. The included studies were published between 2006 and 2022. The majority of these studies included adults who were aged 18 years or older, but there was one study that only enrolled people aged 30 years or older . The majority of studies were conducted in the United States, except for one study which was conducted in Canada . Most studies included participants who had a sufficient understanding of the English language and were able to communicate in English. However, one study included only Spanish-speaking Hispanic patients who were not fluent in speaking English and two studies provided participants with the option to choose between English and Spanish as their language for data collection . Most of the studies were conducted in primary care clinics and community health centres, with the exception of one virtual study, which utilised social media for participant recruitment and data collection (Table ).
Data collection was carried out face-to-face in the clinic by the bilingual research assistant for self-reported questionnaires in most studies, except one study that utilised an online platform to collect the survey data . The proportion of ethnic minority participants that make up the samples of the included studies varied. This included studies that solely focused on ethnic minority populations , and studies with 50% or more of participants from ethnic minority populations . The included seven studies adjusted for potential confounders, which included age , gender , education , income , health status , health insurance status , years lived with T2DM , self-efficacy , insulin use , number of medications , number of health conditions, and race/ethnicity . Participant characteristics The mean age of participants from the included studies ranged from 49.4 to 70.0 years. The ethnic minority backgrounds of participants in the included studies were predominantly Hispanic comprising about 42% of total participants across seven studies and African-American comprising about 39% of total participants across seven studies, Asian/Pacific islanders and other ethnic minority groups comprising about 10%, and remaining 9% comprising of white/non-Hispanic population of total participants across seven studies. The proportion of female participants in the included studies varied from 50% to 72.5% and the mean number of years diagnosed with T2DM ranged from 5 to 9.5. Methodological quality All retained studies used a cross-sectional design and have utilised validated tools to measure health literacy and medication adherence. From the included seven studies, five studies addressed seven out of eight items on the JBI Appraisal checklist, and two studies addressed all eight items on the critical appraisal checklist (Table ) (Appendix 6 for detailed checklist). Two studies did not describe the study subjects and the setting in detail and two studies did not use valid and reliable tools to measure the outcome. Retained studies had identified and adjusted for different potential confounders, among which common confounders were variables such as age, gender, education, income, health insurance status, years of T2DM, race, number of medication, and number of illnesses that can have an impact on both exposure and outcome measures, except for one study which did not state the strategies to deal with confounding factors. Health literacy measure The tools used to measure health literacy among the included studies varied, including the 4-item Brief Health Literacy Screening (BHLS) Tool , the 3-item Brief Health Literacy Screen (BHLS) , a single-item literacy screener , the Rapid Estimate of Adult Literacy in Medicine Revised (REALM-R) , the short-form Test of Functional Health Literacy in Adults (s-TOFHLA) , and the single item Newest Vital Sign (NVS) . Medication adherence measure In the included studies, medication adherence was measured using self-reported measures and prescription refill record. These included the 8-item Morisky Medication Adherence Scale (MMAS-8) , 4-item Morisky Medication Adherence Scale (MMAS-4) , the Simplified Medication Adherence Questionnaire (SMAQ) , the medication engagement subscale of the Summary of Diabetes Self-Care Activities questionnaire (SDSCA) and Proportion of Days Covered (PDC) . 
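Of the adherence measures listed above, the Proportion of Days Covered is the only one derived from prescription refill records rather than self-report. PDC is conventionally defined as the number of days in an observation window on which the patient had medication on hand, divided by the total number of days in the window. The sketch below illustrates that definition with a hypothetical refill history; it is a generic example, not the algorithm used in the included study.

```python
from datetime import date, timedelta

def proportion_of_days_covered(fills, period_start: date, period_end: date) -> float:
    """Generic Proportion of Days Covered (PDC) calculation.

    fills : iterable of (fill_date, days_supply) tuples from pharmacy records
    Returns the fraction of days in [period_start, period_end] on which the
    patient had medication on hand (each calendar day counted at most once).
    """
    n_days = (period_end - period_start).days + 1
    covered = [False] * n_days
    for fill_date, days_supply in fills:
        for offset in range(days_supply):
            day = fill_date + timedelta(days=offset)
            if period_start <= day <= period_end:
                covered[(day - period_start).days] = True
    return sum(covered) / n_days

# Hypothetical refill history over a 90-day observation window
fills = [(date(2023, 1, 1), 30), (date(2023, 2, 5), 30), (date(2023, 3, 10), 30)]
pdc = proportion_of_days_covered(fills, date(2023, 1, 1), date(2023, 3, 31))
print(round(pdc, 2))  # 0.91; a PDC of >= 0.80 is a commonly used adherence threshold
```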
Association between health literacy and medication adherence Among the seven included studies, three studies solely targeted ethnic minority populations and in the remaining four studies, at least 50% of participants identified as being from an ethnic minority background (Table ). Studies solely focused on ethnic minority populations Of the seven included studies, only three studies targeted ethnic minority populations. One study targeting participants from an African American background observed a significant association between health literacy level and medication adherence ( r = 0.49, p = 0.001) . Two studies that targeted people from Hispanic backgrounds did not find any association between health literacy level and medication adherence even after adjusting for covariates in the analysis . Studies with 50% or more participants from an ethnic minority background Among four studies with 50% or more participants from ethnic minority backgrounds, three studies observed no significant association between health literacy and medication adherence even after adjusting for race as a covariate . A study by Fan et al. observed that health literacy was positively associated with medication adherence in the unadjusted bivariate analysis (β = 0.39, SE = 0.19, P = 0.037), but health literacy was not significantly associated with medication adherence after adjusting for covariates (β = 0.33, P = 0.22).
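The pattern reported by Fan et al., in which an association that is significant in an unadjusted bivariate model is attenuated after covariate adjustment, is what one would expect when a third variable is related to both health literacy and adherence. The sketch below reproduces that pattern on purely synthetic data using statsmodels; the variable names, effect sizes, and coefficients are illustrative assumptions and are not taken from any included study.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data only: illustrates how covariate adjustment can attenuate an
# unadjusted association; it does not reproduce any included study's data.
rng = np.random.default_rng(3)
n = 300
education = rng.normal(size=n)                      # hypothetical shared covariate
health_literacy = 0.8 * education + rng.normal(size=n)
adherence = 0.7 * education + rng.normal(size=n)    # driven by the covariate only

# Unadjusted (bivariate) model: adherence ~ health_literacy
unadjusted = sm.OLS(adherence, sm.add_constant(health_literacy)).fit()

# Adjusted model: adherence ~ health_literacy + education
X = sm.add_constant(np.column_stack([health_literacy, education]))
adjusted = sm.OLS(adherence, X).fit()

print("unadjusted beta, p:", round(unadjusted.params[1], 2), round(unadjusted.pvalues[1], 4))
print("adjusted   beta, p:", round(adjusted.params[1], 2), round(adjusted.pvalues[1], 4))
# Typical output: the unadjusted coefficient is clearly positive with a small
# p-value, while the adjusted coefficient is much closer to zero.
```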
The objective of this systematic review was to examine the association between health literacy and medication adherence in individuals from ethnic minority backgrounds who have T2DM. This review highlights critical knowledge gaps in the existing literature and methodological weaknesses in the included studies. It also highlights the unique challenges faced by ethnic minority groups such as cultural and linguistic barriers. By identifying the areas of insufficient evidence, this review highlights the critical need for further investigation targeting specific populations. Among retained studies, only one study observed a significant association between health literacy level and medication adherence among people from ethnic minority backgrounds, and that study solely targeted an African American population . Most of the studies ( n = 6) were conducted in the United States and in most studies, participating ethnic minority groups were predominantly from African American and Hispanic backgrounds. The methodological quality of the studies ranged from good to fair, with most studies adjusting for socio-demographic variables to minimise the risk of bias due to confounders. The covariates most commonly adjusted for across the included studies were age, gender, educational level, income, years of T2DM, self-efficacy, number of medications, number of health conditions, and race/ethnicity. Findings across studies included in this systematic review were inconsistent, which could be attributed to several factors. One of the key factors leading to inconsistency in the results is the use of different assessment tools to measure health literacy and medication adherence in people from different ethnicities living with T2DM. Some health literacy measures used in the included studies were self-reported, perception-based (subjective), and some were performance-based (objective) health literacy measures . The included studies assessed different domains of health literacy such as numeracy, information seeking, pronunciation, comprehension, and general literacy. Combining both types of measures can give more accurate results when investigating health literacy and health outcomes rather than using only one type . Moreover, all included studies measured general health literacy using health literacy tools (Appendix 4) rather than diabetes-specific health literacy. Two recent scoping reviews highlighted the diversity of instruments used to assess health literacy in patients with T2DM and observed that these instruments are validated in non-ethnic minority populations only, and are therefore not recommended for use in ethnic minority populations such as Hispanic and African American groups . Nonetheless, it is pleasing to note that newer health literacy assessment instruments specific to T2DM are being developed and validated worldwide . In terms of measurement of medication adherence, all included studies utilised different assessment tools (Appendix 5).
Although all studies used validated instruments to assess medication adherence, one study measured medication adherence by calculating the proportion of days covered for medication. It is noteworthy that instruments utilised by researchers in the included studies of this systematic review measured varying domains such as medication adherence, adherence to self-care activities including diet, physical activity, medication, and medication refill history. Such differences among instruments may lead to varying levels of sensitivity and specificity in measuring medication adherence constructs and therefore the lack of standardisation may lead to differences in the way health literacy and medication adherence were measured across studies. In the studies that did not exclusively focus on participants from ethnic minority backgrounds, the tools were not adapted for individuals who were non-English speakers or not predominantly English- speaking. The lack of culturally or linguistically appropriate tools may have contributed to differing findings between ethnic minority groups and others. Therefore, this makes it difficult to undertake a meta-analysis to pool the evidence from included studies. The cross-sectional design employed in all the included studies is an another common factor that might have contributed to inconsistency in the results, limiting the ability to draw causal inferences from the findings . The findings of this systematic review are consistent with another systematic review by Chima et al. ; although their review findings were not specifically focused on examining differences in the association between health literacy and medication adherence among ethnic minority population groups. Across the seven studies, there were a variety of ethnic minority groups included, and only one study, which involved African Americans, reported an association between health literacy and medication adherence with ethnicity, however, cultural, and linguistic factors were not consistently identified as variables in any studies. Most studies collected data in the English language, with bilingual research staff or interpreters assisting participants with low English proficiency. However, only two studies translated the questionnaire from English to Spanish, and provided the option to the participant to respond in their preferred language. Tools used in the studies lacked cultural and linguistic sensitivity for non-English-speaking populations, a process called cross-cultural adaptation, which involves translating and culturally adapting the tool to ensure relevance in new settings . This meticulous approach guarantees the reliability and validity of the instruments when used in diverse cultural and linguistic contexts . People with low health literacy face challenges in understanding medication labels, dosage instructions and the importance of treatment regimens due to language barriers, low education levels, and acculturation levels in the host countries . The language used in a questionnaire is crucial because, if it is not appropriate for a specific culture, the responses may not accurately reflect an individual’s health literacy and medication adherence . Similar to language barriers, cultural beliefs are also an important factor behind shaping diabetes self-management behaviours, such as medication adherence, physical activity, and diet in ethnic minority populations . 
The included studies did not focus on cultural beliefs and traditions that strongly influence illness perceptions, adherence to treatment regimens, and willingness to adhere to medications. The studies included a diverse range of ethnic minority populations, who may have different beliefs and practices which may explain the conflicting results. There can be multiple explanations behind non-adherence or low adherence to treatment regimens in people from different ethnicities, including the preference for complementary medicine and traditional remedies over allopathic medicine . Some cultural beliefs support self-care activities that adjunct the therapeutic treatment, on the other hand, some may not support the utilisation of allopathic medicine. Socio-economic disparities intersect with health literacy and medication adherence in people living with T2DM, creating a complex web of interconnected factors that significantly impact the management of T2DM in ethnic minority populations. Although there is some evidence of the association between health literacy and medication adherence, this was inconclusive primarily attributed to variations in the assessment methods for health literacy and medication adherence, as well as the diverse range of ethnic minority groups included across the studies. Addressing cultural beliefs, language barriers, and socio-economic disparities is critical for improving medication adherence and diabetes self-management in ethnic minority populations. There is a need for studies focusing on specific culturally and linguistically diverse (CALD) groups rather than broad categorisations of ethnicity/race. There was one study with 35% African American participants which was excluded from the review due to the low percentage of the target population for this review. In this study, after adjusting for covariates in their multivariate analysis, they reported an association between African American ethnicity and poor medication adherence, but not between health literacy and medication adherence in the African American population . This explains why it is necessary to specifically recruit CALD communities that are not entangled with a predominately white population, and therefore can provide accurate results. It is critical to address the disparities in cultural and linguistic considerations within healthcare research. Economic disparities and limited access to resources exacerbate challenges faced by individuals with lower health literacy, resulting in disparities in understanding and adhering to medication regimens. This complex scenario underscores the inequities in diabetes management, emphasising the need for a more equitable approach to T2DM care within ethnic minority communities. A comprehensive approach, incorporating cross-cultural adaptation and a nuanced understanding of cultural beliefs, is crucial to ensuring that health interventions are accessible, relevant, and effective across diverse populations. Achieving equity in healthcare requires acknowledging and dismantling barriers, whether they be language-related, cultural, or socioeconomic, to ensure that all individuals, have equal access and understanding of vital health information and resources. Further research should investigate these factors insightfully to co-design, implement, and evaluate interventions to improve medication adherence and health literacy among ethnic minority adults living with T2DM. 
Also, future research should consider lifestyle and self-management interventions, because these also have a significant impact on diabetes management. In addition, population segmentation can play a crucial role in identifying subgroups of ethnic minorities with varying health literacy and medication adherence . Further studies are needed to identify the optimal segmentation frameworks that consider factors such as cultural differences, socioeconomic status, and health literacy levels to ensure effective and equitable healthcare delivery. Implications for policy The findings from this study highlight the gap in existing literature, which necessitates comprehensive and culturally informed strategies to address this gap. The existing literature does not incorporate cultural and linguistic factors in the research; therefore, future research should focus on investigating the relationship between health literacy and medication adherence among ethnic minority populations with a specific focus on cultural and linguistic barriers, and should utilise validated tools for specific populations. It is important to understand these factors, as this can assist policymakers and health professionals in designing targeted interventions and providing appropriate support and practical advice to ethnic minority people living with T2DM. Advocating for the inclusion of diverse populations in research studies can provide policymakers with a more accurate representation of their experiences and needs. Support from health professionals can have an impact on the health outcomes of those from ethnic minority backgrounds, and it requires health professionals to employ strategies to ensure patients understand the disease process, prevention, and management. This includes using plain language , simple communication , visual aids , and the teach-back method , where clinicians can verify patient understanding and so can improve health literacy and their health outcomes . Staff training in health literacy and culturally safe healthcare practices is crucial . Additionally, availability of easily readable written materials and education about health conditions can help in improving health literacy . This has implications for clinical practice and policymaking; thus, policymakers should support the modification of health services environments and the development of policies or frameworks that promote these practices to improve health outcomes for ethnic minority populations with T2DM. Strengths and limitations To the best of our knowledge, this is the first systematic review to explore the association of health literacy and medication adherence in ethnic minority adults with T2DM. An extensive search was conducted in five electronic databases, and a thorough search strategy was developed in consultation with a health science librarian to ensure the inclusion of a wide range of relevant evidence and to reduce the risk of selection bias. The JBI critical appraisal tool was utilised to assess the methodological quality of all included studies, which enhances the credibility and rigour of the review’s findings. A review of existing systematic reviews identified extensive knowledge gaps in research focusing on ethnic minority populations with T2DM. The cross-sectional design of all the included studies limits the ability to establish causality.
A meta-analysis was not possible as the included studies assessed health literacy and medication adherence using different instruments, varying sample sizes, varying percentages of ethnic minority populations, age groups, and the results were not disaggregated by ethnic categories. The heterogeneity in sample characteristics made it difficult to interpret and combine effect sizes across studies. Another limitation of this review is the exclusion of grey literature, which may contribute to publication bias. Only studies published in the English language were included in this review and therefore it is possible that studies in other languages were not included in the review findings. Another limitation is that a diverse range of ethnic minority populations were included in the review, with the most represented ethnic minority groups from African American and Hispanic backgrounds.
Evidence on the association between health literacy and medication adherence in ethnic minority adults with T2DM is weak and inconsistent. All study designs were cross-sectional; therefore, any causal inferences were not possible. To understand this association more clearly in ethnic minority populations and the impact of cultural and linguistic factors, well-designed studies are required. Additional file 1: Appendix 1. Existing Systematic reviews. Appendix 2. Search terms. Appendix 3. Search strategy for all 5 Databases. Appendix 4. Health literacy measurement tools. Appendix 5. Medication Adherence Measurement tools. Appendix 6. Assessment of methodological quality of the retained studies. Appendix 7. Reasons for exclusion of studies. Appendix 8(a). PRISMA Checklist. Appendix 8(b). PRISMA Abstract Checklist. Appendix 9. Data Extraction Form.
Clinical pharmacology strategies in supporting drug development and approval of antibody–drug conjugates in oncology
a81cc178-dc92-4808-ac3f-b5846f2cab13
8110483
Pharmacology[mh]
Antibody drug conjugates (ADCs) are an emerging class of anti-cancer therapeutic agents that combine the antigen-targeting specificity and favorable pharmacokinetic properties of monoclonal antibodies (mAbs) with the cytotoxic potential of small-molecule chemotherapeutics . ADCs typically consist of three components: a mAb that determines which cells are targeted, a cytotoxic drug that determines the mechanism by which cells are killed, and a chemical linker that attaches these two components and determines how the drug is released. The mAb component enables the ADC to bind specifically to targeted cell-surface antigens overexpressed on tumor cells. Upon binding, the ADC is internalized and trafficked to lysosomes, from which the cytotoxic drug is released within the cell, resulting in cell death. Targeted delivery of highly potent cytotoxic drugs is designed to enhance the antitumor effects of the molecule while minimizing toxicity in normal tissues. As of August 2020, nine ADCs have received US Food and Drug Administration (FDA) approval . The first of these, (1) gemtuzumab ozogamicin (Mylotarg®; an anti-CD33 mAb linked to calicheamicin), for the treatment of acute myelogenous leukemia (AML), was approved in 2000 under the FDA accelerated-approval process . In 2010, this agent was voluntarily withdrawn from the market after confirmatory trials failed to demonstrate clinical benefit and because of safety concerns . Gemtuzumab ozogamicin was re-approved in 2017 at a lower, fractionated dose of 3–6 mg/m2 (compared to 9 mg/m2 at first approval) . Since gemtuzumab ozogamicin's initial market approval, seven more ADCs have been FDA approved: (2) brentuximab vedotin (Adcetris®; an anti-CD30 mAb and monomethyl auristatin E [MMAE] conjugate) for the treatment of Hodgkin lymphoma and systemic anaplastic large-cell lymphoma, (3) trastuzumab emtansine (T-DM1, Kadcyla®; an anti-human epidermal growth factor receptor 2 (HER2) mAb and DM1 [a derivative of maytansine] conjugate) for the treatment of HER2-positive metastatic breast cancer (mBC), (4) inotuzumab ozogamicin (Besponsa®, an anti-CD22 mAb and calicheamicin conjugate) for the treatment of adults with relapsed or refractory B-cell precursor acute lymphoblastic leukemia (ALL), (5) polatuzumab vedotin (Polivy®, an anti-CD79b mAb and MMAE conjugate) for the treatment of relapsed or refractory diffuse large B-cell lymphoma (DLBCL), (6) enfortumab vedotin (Padcev®, an anti-Nectin 4 mAb and MMAE conjugate) for the treatment of locally advanced or metastatic urothelial cancer, (7) trastuzumab deruxtecan (Enhertu®, an anti-HER2 mAb and exatecan derivative conjugate) for the treatment of HER2-positive mBC, and (8) sacituzumab govitecan (Trodelvy®, an anti-Trop-2 mAb and SN-38 conjugate) for the treatment of metastatic triple-negative breast cancer . In August 2020, the ninth ADC, belantamab mafodotin-blmf (Blenrep®, an anti-BCMA mAb and MMAF conjugate), received accelerated approval from the FDA for the treatment of relapsed and refractory multiple myeloma . These ADCs demonstrate that the therapeutic window of otherwise intolerable cytotoxic drugs can be improved to a therapeutically beneficial level by conjugating them to an antibody. Despite the great success of ADCs, it is worth noting that the therapeutic window for ADCs remains relatively narrow, with the maximum tolerated dose (MTD) often reached before ADCs achieve the maximal efficacious dose .
As a result, numerous innovative approaches (e.g., site-specific conjugation or novel payloads) have been implemented to further improve the therapeutic window, resulting in "next-generation" ADCs, many of which are currently being tested in clinical development. ADCs are currently understood to be cleared through two major pathways: proteolytic degradation and deconjugation . ADC clearance through proteolytic degradation is driven primarily by catabolism mediated by target-specific or nonspecific cellular uptake followed by lysosomal degradation, similar to mAbs. Deconjugation clearance is usually mediated by enzymatic or chemical cleavage (e.g., maleimide exchange) of the linker, leading to the release of the cytotoxic drug from the ADC . Once released from the ADC, the cytotoxic drug may be further metabolized, transported, and eliminated via traditional mechanisms applicable to small molecules (see DDI section). In addition, ADC catabolism and deconjugation in vivo lead to the formation of multiple different molecular species (e.g., ADC species with different drug-to-antibody ratios [DAR] and payload-containing catabolites) . The bioanalytical strategy for ADCs thus requires defining the specific analytes of relevance to clinical pharmacology. Although multiple analytes may be quantified following the dosing of an ADC, the clinical importance of the multi-analyte bioanalytical data in the context of safety and efficacy remains to be established. With numerous ADCs in clinical development, streamlining the bioanalytical and clinical pharmacology strategy is critical. The clinical pharmacology assessments for ADCs that address scientific and regulatory concerns are summarized in Fig. . The clinical pharmacology of an ADC incorporates elements of small-molecule and mAb (large-molecule) development strategies. The scope of work is usually unique and depends on the linker/cytotoxic drug technology, the heterogeneity of the ADC, and the pharmacokinetics (PK) and safety/efficacy profile of the specific ADC in clinical development. Given the structural complexity, multiple analytes were measured across Phase I, Phase II and Phase III clinical studies to characterize the PK of an ADC, including, but not limited to, conjugate, total antibody and unconjugated payload. Intrinsic factors (e.g., body weight, organ dysfunction) and extrinsic factors (e.g., concomitant medications) likely to impact the PK of an ADC were assessed using diverse approaches (Fig. ). Relationships between exposure to ADC conjugate (and other relevant analytes) and efficacy/safety were assessed using quantitative approaches in support of clinical dosing regimen selection. This review summarizes the unique clinical pharmacology considerations supporting the development and approval of ADCs (Fig. ). The seven approved ADCs are used as specific examples to illustrate the customized approach to clinical pharmacology assessments in their clinical development. Sacituzumab govitecan and belantamab mafodotin-blmf were not included in the summary due to the limited clinical pharmacology information available at the time of the review. ADCs incorporate both large- and small-molecule characteristics and are usually present as a heterogeneous mixture of species differing not only in the number of cytotoxic drugs attached to the antibody, but also in the protein conjugation sites of drug linkage . Furthermore, biotransformations in vivo can lead to additional changes in DAR, resulting in dynamically changing mixtures.
As a result, and unlike mAbs, the in vivo heterogeneity of ADCs makes it critical to measure multiple analytes in clinical trials . These analytes may include, but are not limited to, the following: ADC conjugate (measured as conjugated antibody or conjugated payload), total antibody (TAb, conjugated and unconjugated antibody), unconjugated antibody and unconjugated (free) payload. Conjugated antibody and conjugated payload are the two alternative ways to quantify the ADC conjugate . From the perspective of the antibody, the ADC conjugate can be measured as "conjugated antibody", namely the concentration of antibody molecules with one or more cytotoxic drugs attached. This bioanalytical method is used to measure serum concentrations of ADC conjugate for brentuximab vedotin, inotuzumab ozogamicin, T-DM1, enfortumab vedotin, and trastuzumab deruxtecan . Alternatively, from the perspective of the payload, the ADC conjugate can be measured as "conjugated drug", namely the total concentration of cytotoxic drug that is conjugated to the antibody. Currently, only ADCs with cleavable linkers are amenable to the conjugated drug assay. This bioanalytical method is used to measure polatuzumab vedotin, given that not all the DAR species can be measured accurately in the conjugated-antibody ELISA assay . In comparison, the ADC conjugate was not measured for gemtuzumab ozogamicin; instead, TAb and unconjugated calicheamicin were measured, likely owing to the bioanalytical techniques available at the time of development. Although multiple analytes may be quantified following the dosing of an ADC, the bioanalytical strategy for ADCs requires defining the specific analytes of relevance to clinical pharmacology in the context of safety and efficacy. Most commonly, three analytes, namely ADC conjugate, TAb and unconjugated cytotoxic drug, are measured in preclinical and clinical studies to characterize the PK properties of an ADC . Population PK modeling is an important approach for characterizing ADC PK properties and assessing the effect of intrinsic and extrinsic factors on ADC PK, and thus for guiding dose recommendations in specific populations (e.g., geriatric patients or patients with organ dysfunction). Given that multiple analytes are measured for an ADC during its clinical development, one of the unique features of population PK for an ADC is that more than one analyte is often included in the population PK model development. ADC conjugate, the main analyte of interest per the mechanism of action of ADCs, is the most common analyte included in the population PK model. Additionally, given the high potency of cytotoxic drugs, the potential contribution of unconjugated drug to safety cannot be ruled out. Exposure-safety analysis with unconjugated cytotoxic drug has been conducted for four of the seven approved ADCs (see exposure–response section). As a result, the unconjugated drug analyte is often included in the population PK model in addition to ADC conjugate to understand the PK characteristics of unconjugated drug after ADC dosing and to generate exposure metrics for exposure-response analysis. As shown in Table , five of the seven approved ADCs include the two analytes in their population PK models.
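As a rough illustration of the two-analyte structure described above, the sketch below simulates a hypothetical ADC conjugate eliminated by parallel linear and saturable (Michaelis-Menten) pathways, with a fraction of that loss appearing as a faster-clearing unconjugated payload. The one-compartment-per-analyte structure and every parameter value are assumptions for illustration only; published models for approved ADCs use 2- or 3-compartment structures with estimated covariate effects.

```python
# Minimal two-analyte PK sketch: ADC conjugate feeding an unconjugated-payload compartment.
# Every parameter is hypothetical; real models use 2-3 compartments per analyte plus covariates.
import numpy as np
from scipy.integrate import solve_ivp

V_ADC = 3.0     # L, central volume of the ADC conjugate (assumed)
CL_LIN = 0.5    # L/day, linear clearance of the conjugate (assumed)
VMAX = 2.0      # mg/day, capacity of the saturable (target-mediated) pathway (assumed)
KM = 0.5        # mg/L, Michaelis constant (assumed)
F_REL = 0.05    # fraction of conjugate loss appearing as circulating payload (assumed)
V_PL = 90.0     # L, apparent central volume of the unconjugated payload (assumed)
CL_PL = 50.0    # L/day, apparent clearance of the unconjugated payload (assumed)

def rhs(t, y):
    a_adc, a_pl = y                                             # amounts in mg
    c_adc = a_adc / V_ADC
    elim_adc = CL_LIN * c_adc + VMAX * c_adc / (KM + c_adc)     # mg/day lost from the conjugate
    return [-elim_adc, F_REL * elim_adc - CL_PL * a_pl / V_PL]

dose_mg = 3.6 * 70.0                          # e.g., 3.6 mg/kg in a 70 kg patient (assumed dose)
t_eval = np.linspace(0.0, 21.0, 211)          # one 21-day cycle
sol = solve_ivp(rhs, (0.0, 21.0), [dose_mg, 0.0], t_eval=t_eval, max_step=0.1)

conc_adc = sol.y[0] / V_ADC                   # mg/L (equivalently ug/mL)
conc_pl = sol.y[1] / V_PL * 1000.0            # ng/mL
print(f"Conjugate Cmax ~ {conc_adc.max():.1f} ug/mL; payload Cmax ~ {conc_pl.max():.1f} ng/mL")
```

Even this toy model reproduces the qualitative pattern described in the text: conjugate concentrations in the tens of micrograms per milliliter and payload concentrations several orders of magnitude lower, which is why the two analytes are usually modeled jointly rather than in isolation.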
Integrated two-analyte models (i.e., ADC conjugate-unconjugated payload models) were developed for brentuximab vedotin, polatuzumab vedotin, enfortumab vedotin and trastuzumab deruxtecan, while for gemtuzumab ozogamicin, separate population PK models were developed for TAb and unconjugated payload because the ADC conjugate analyte was not measured clinically . Typically, the ADC conjugate is dosed in the linear range based on the findings of the phase 1 dose escalation study. The population PK model structures for ADC conjugate are usually characterized by a 2- or 3-compartment model with a mixture of linear and non-linear elimination pathways. Notably, three of the seven ADCs have non-linear, time-dependent clearance, and all of them target hematological malignancies (Table ). The ADC linear clearance (CL = 1.6–2.5 L/day) and central volume of distribution (Vc = 6.4–6.7 L) are similar for brentuximab vedotin and enfortumab vedotin, the MMAE-containing ADCs that share the same cytotoxic drug and linker but are directed against different targets . Polatuzumab vedotin is not included in the comparison due to apparent non-linear and time-dependent PK . Conversely, T-DM1 and trastuzumab deruxtecan, both of which share the same mAb but have different cytotoxic drugs and linkers, exhibited linear PK with similar CL (0.4–0.7 L/day) and central volume of distribution (~3 L) at the clinically approved doses . As expected for small molecules, the unconjugated payloads released from ADCs exhibit faster apparent clearances (> 19 L/day) from circulation, with larger apparent central volumes of distribution (> 80 L) into extravascular tissues, compared to the ADC or TAb. For the MMAE-containing ADCs, the MMAE apparent clearance and apparent central volume of distribution are 45–66 L/day and 80–99 L, respectively. The calicheamicin analyte was not characterized in the population PK model for inotuzumab ozogamicin, so PK parameters for the payload were not available for comparison; for gemtuzumab ozogamicin, unconjugated calicheamicin CL/F was 32 L/day and V1/F was 97 L . For trastuzumab deruxtecan, the unconjugated DXd clearance (CL = 19 L/day) was the lowest among the payloads, and the central volume was not estimable with the data collected, so it was fixed to a nonclinical value in the population PK model . Body weight (BW) or body surface area (BSA) is consistently identified as a significant covariate on key PK parameters (i.e., CL and/or Vc) in the final population PK models for all the approved ADCs. The exponent of the BW effect on CL ranged from 0.49 to 0.75, supporting the BW- and BSA-based dosing strategies for ADCs. It is worth noting that BW and BSA are highly correlated; of the two covariates, BW is usually preferred in the model because it is the simpler measure to obtain. Six of the seven approved ADCs identified BW as a significant covariate in their population PK models; the exception, inotuzumab ozogamicin, included BSA (Table ). Among the seven approved ADCs, five utilized BW-based dosing regimens, with the two calicheamicin-containing ADCs using BSA-based dosing. Because these agents have relatively narrow therapeutic windows, some ADCs (i.e., brentuximab vedotin, enfortumab vedotin) adopted a dose-capping strategy to further reduce inter-individual variability in ADC exposure and thus potentially improve their safety profiles, particularly for patients with higher BW (i.e., BW > 100 kg) who would otherwise achieve higher drug exposure from a weight-based dosing regimen .
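To make the allometric body-weight relationship and the rationale for dose capping concrete, the following minimal sketch applies a power model for clearance and compares AUC (dose/CL) under weight-based dosing with and without a 100 kg cap. The reference clearance, reference weight, exponent, dose level, and cap are illustrative assumptions, not parameters from any approved ADC's population PK model.

```python
# Illustrative allometric body-weight effect on clearance and the impact of dose capping.
# CL_i = CL_ref * (BW_i / BW_ref) ** exponent, with exponents of ~0.49-0.75 reported for ADCs.
CL_REF = 1.7       # L/day at the reference weight (assumed)
BW_REF = 70.0      # kg, reference body weight (assumed)
EXPONENT = 0.75    # assumed allometric exponent
DOSE_PER_KG = 1.8  # mg/kg, illustrative weight-based dose
CAP_KG = 100.0     # dosing weight capped at 100 kg, as adopted for some ADCs

def auc(bw_kg, cap_kg=None):
    """AUC = dose / CL under weight-based dosing, optionally capping the dosing weight."""
    dosing_weight = min(bw_kg, cap_kg) if cap_kg is not None else bw_kg
    dose_mg = DOSE_PER_KG * dosing_weight
    cl = CL_REF * (bw_kg / BW_REF) ** EXPONENT        # L/day
    return dose_mg / cl                               # mg*day/L

for bw in (50, 70, 100, 130):
    print(f"BW {bw:>3} kg: AUC uncapped {auc(bw):5.1f}, capped {auc(bw, CAP_KG):5.1f} mg*day/L")
```

Because the clearance exponent is below 1, straight per-kilogram dosing yields systematically higher AUC in heavier patients; capping the dosing weight is the simple mechanism that blunts this trend at the upper end of the weight range.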
Notably, no significant PK differences based on age were observed. Some differences in PK parameters with gender were observed, but post-hoc analyses showed that these did not have any clinically meaningful effect on ADC exposures and thus did not warrant dose adjustment based on gender. The impact of extrinsic and intrinsic factors on ADC PK has been discussed previously . Consistent with other biotherapeutics, baseline albumin and disease factors (e.g., tumor burden) were often identified as significant covariates for ADC clearance; however, the magnitude of the effect of these covariates on ADC exposure is minimal compared with overall PK variability, and therefore BW- and BSA-based dosing without further adjustment for other factors is considered appropriate for ADCs. The general concept that hepatic impairment may not affect the PK of therapeutic proteins, including mAbs and ADCs, is being challenged by emerging evidence . A recent publication by Sun et al. showed that, of 20 mAbs and 4 ADCs with hepatic impairment data, decreases in exposure were observed for 1 mAb and 2 ADCs in patients with hepatic impairment. Although the mechanism is unknown, Sun et al. propose that worsening of disease associated with hepatic impairment may increase the elimination of therapeutic proteins through increased competition for FcRn binding with other soluble proteins (i.e., albumin) and through target-mediated drug disposition. In addition, the liver and kidneys play an important role in the elimination of the small-molecule component of an ADC, namely the cytotoxic drug once it is released from the ADC. As a result, impairment of these organs may alter ADC and/or cytotoxic drug clearance, leading to exposure changes that may in turn impact safety and efficacy. This is especially important given that ADCs generally have a relatively narrow therapeutic index. Therefore, assessing the impact of organ dysfunction on the disposition of ADCs to inform appropriate dosing in these patients is an important component of the clinical pharmacology strategy for these molecules. Table summarizes the impact of liver and kidney function on ADC PK and the corresponding dosing recommendations for the seven approved ADCs. Two alternative approaches were used to characterize the impact of organ dysfunction on ADC PK across the seven approved ADCs: (1) a dedicated organ dysfunction clinical study, or (2) a model-based approach using data from patients with organ dysfunction enrolled across clinical studies. As shown in Table , dedicated hepatic and/or renal impairment clinical studies were conducted for three of the seven ADCs: brentuximab vedotin, T-DM1, and enfortumab vedotin. For the remaining ADCs, modeling and simulation through population PK has been used to assess the organ dysfunction subpopulation across clinical studies. The current ADC model-based approach requires that existing clinical studies allow enrollment of patients with organ impairment. ADC conjugate exposure was generally comparable between patients with hepatic impairment and those with normal hepatic function for most of the approved ADCs, except for brentuximab vedotin and T-DM1 (Table ). For brentuximab vedotin, ADC conjugate exposure (i.e., AUC) decreased by 35% in lymphoma patients with moderate hepatic impairment, and there was only one patient each with mild or severe hepatic impairment .
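Exposure comparisons of the kind described above (impaired versus normal organ function) are conventionally reported as geometric mean ratios (GMRs) with confidence intervals computed on the log scale. The snippet below shows one standard way to do this; the two small AUC samples are fabricated purely for illustration and do not correspond to any study data.

```python
# Geometric mean ratio (impaired vs. normal organ function) with a 90% CI on the log scale.
import numpy as np
from scipy import stats

auc_normal = np.array([110.0, 95.0, 130.0, 120.0, 105.0, 140.0])   # illustrative AUC values
auc_impaired = np.array([70.0, 88.0, 75.0, 92.0, 81.0])            # illustrative AUC values

log_n, log_i = np.log(auc_normal), np.log(auc_impaired)
diff = log_i.mean() - log_n.mean()                                  # difference of log means
se = np.sqrt(log_i.var(ddof=1) / len(log_i) + log_n.var(ddof=1) / len(log_n))
dof = len(log_i) + len(log_n) - 2                                   # simple pooled df
t_crit = stats.t.ppf(0.95, dof)                                     # two-sided 90% CI

gmr = np.exp(diff)
lo, hi = np.exp(diff - t_crit * se), np.exp(diff + t_crit * se)
print(f"GMR (impaired/normal) = {gmr:.2f}, 90% CI {lo:.2f}-{hi:.2f}")
```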
For T-DM1, the AUC of T-DM1 conjugate at Cycle 1 in patients with mild and moderate hepatic impairment was approximately 38% and 67% lower, respectively, than that in patients with normal hepatic function . Interestingly, the exposure difference was less apparent after repeated dosing, with T-DM1 AUC at Cycle 3 in patients with mild and moderate hepatic impairment largely comparable to that in patients with normal hepatic function. There was no apparent effect of hepatic impairment on cytotoxic drug exposure except for brentuximab vedotin (unconjugated MMAE AUC GMR 2.29 for any hepatic impairment vs normal hepatic function) and polatuzumab vedotin (unconjugated MMAE AUC GMR 1.40 for mild hepatic impairment vs normal hepatic function) . The two- to threefold increase in unconjugated MMAE exposure in patients with moderate hepatic impairment resulted in a label recommendation for brentuximab vedotin to avoid use in patients with moderate to severe hepatic impairment . The comparable unconjugated DM1 exposure and the transient change in T-DM1 conjugate exposure in mild or moderate hepatic impairment led to the label recommendation that no adjustment of the T-DM1 dose is needed in these patients . Although an increase in unconjugated MMAE exposure was observed for polatuzumab vedotin, the increase was not clinically relevant based on the exposure-safety relationship established across clinical studies, and no adjustment of the starting dose is required for polatuzumab vedotin in patients with mild hepatic impairment . For patients with renal impairment, ADC conjugate and cytotoxic drug PK are comparable for most of the approved ADCs, except for brentuximab vedotin in patients with severe renal impairment (ADC AUC GMR 0.71 and MMAE AUC GMR 1.90) (Table ) . The altered PK of brentuximab vedotin resulted in a label recommendation to avoid use in patients with severe renal impairment . The approach to evaluating organ dysfunction for ADC drug development remains situation dependent but is trending toward modeling and simulation. The population PK approach is routinely used to evaluate the impact of organ dysfunction on the exposure of the ADC and its relevant analytes. If an impact on exposure is identified, its implications for dose recommendations should be assessed in the context of the benefit-risk assessment and/or exposure–response relationships. In the future, a physiologically based pharmacokinetic (PBPK) modeling approach may be used to assess the impact of organ dysfunction on ADC PK once ADC PBPK models and the corresponding organ dysfunction populations are fully established. Assessing drug–drug interaction (DDI) risk associated with ADCs needs to consider both the large- and small-molecule components of the ADC. The cytotoxic payloads, upon release from ADCs, are expected to behave like small molecules and thus may be of concern for enzyme- or transporter-mediated DDIs. The FDA and European Medicines Agency (EMA) have issued comprehensive recommendations for in vitro and in vivo studies to evaluate DDI potential for small molecules, but specific guidelines on DDI risk assessment for ADCs have not been issued. Given the relatively high potency and low systemic exposure of cytotoxic payloads, some unique DDI considerations might be needed for ADCs. Unlike for other molecules, a human mass balance study was not conducted for most of the approved ADCs (6 of the 7).
Brentuximab vedotin is the only ADC for which a clinical excretion study was conducted, although without complete recovery . Instead, leveraging preclinical ADME data is the main strategy for initial DDI assessment of ADCs. DDIs related to the payload have been extensively evaluated during the clinical development of an ADC. Table summarizes the approaches to payload-mediated DDI assessment, the key findings, and their implications for the drug labels of the seven approved ADCs, which include four different payloads: calicheamicin, MMAE, DM1, and DXd. Multiple approaches, namely dedicated clinical DDI studies, theoretical risk assessment, physiologically based pharmacokinetic (PBPK) modeling, concomitant medication analysis, and referencing existing DDI data from a previously established ADC, were used for DDI risk assessment. Theoretical risk assessment based on the in vitro DDI and clinical data is the most commonly used approach for the 7 ADCs (Table ). Dedicated clinical DDI studies were conducted for two of the seven ADCs: brentuximab vedotin and trastuzumab deruxtecan. A PBPK modeling approach leveraging available clinical DDI data for the same payload was used to inform DDI risk for polatuzumab vedotin, while exploratory concomitant medication analyses of clinical data, using non-compartmental analysis (NCA) or population PK, were used for T-DM1 to evaluate the effect of concomitant medications on payload PK. Given the low systemic concentrations of released payloads relative to their in vitro Ki/IC50 values for metabolizing enzymes and/or transporters, the risk of a payload acting as a perpetrator of enzyme- or transporter-mediated DDIs is considered low. As shown in Table , most of these assessments are based on theoretical risk assessments using the in vitro DDI and clinical data, which often results in a labeling statement such as "at clinically relevant concentrations, the payload has no or low potential to inhibit CYP enzymes and/or transporters". In vitro studies showed that MMAE and DM1 exhibited time-dependent and/or competitive inhibition of CYP3A with Ki values in the micromolar range; however, the systemic levels of MMAE and DM1 released after administration of brentuximab vedotin and T-DM1 at their clinically approved doses are only in the nanomolar range . Consistent with these observations, a dedicated clinical DDI study showed that co-administration of brentuximab vedotin did not affect exposure to midazolam, a sensitive CYP3A substrate . PBPK modeling integrating the in vitro DDI and clinical data further confirmed the low risk of MMAE acting as a perpetrator toward CYP3A substrates. The prediction results were highlighted in the polatuzumab vedotin prescribing information . In contrast, the potential for a released payload to be a DDI victim still exists, which could possibly impact safety as these payloads are highly potent and typically have a narrow or even no therapeutic window. As shown in Table , three of the four payloads of the approved ADCs are metabolized by CYP3A, the exception being calicheamicin. In the case of calicheamicin, it has been shown that N-acetyl gamma calicheamicin dimethyl hydrazide, the main circulating catabolite, is extensively metabolized, primarily via non-enzymatic reduction of the disulfide moiety rather than by CYP enzymes; the DDI risk for this catabolite as a victim of metabolizing enzymes is therefore considered low, and no additional assessment was conducted.
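The "low circulating concentration relative to Ki" argument for perpetrator risk can be quantified with the basic static model from the FDA in vitro DDI guidance, in which R1 = 1 + Imax,u/Ki and values below the 1.02 cut-off suggest low risk of reversible CYP inhibition. The concentration, unbound fraction, and Ki used below are placeholders rather than measured values for any specific payload.

```python
# Basic static model for reversible CYP inhibition risk (FDA in vitro DDI guidance):
# R1 = 1 + Imax,u / Ki; R1 >= 1.02 flags the need for further evaluation.
def r1(imax_total_nM, fu_plasma, ki_uM):
    imax_unbound_uM = imax_total_nM * fu_plasma / 1000.0   # unbound Cmax converted to uM
    return 1.0 + imax_unbound_uM / ki_uM

# Hypothetical payload: ~5 nM total Cmax, 20% unbound, Ki ~ 2 uM against a CYP enzyme.
value = r1(imax_total_nM=5.0, fu_plasma=0.2, ki_uM=2.0)
flag = "further evaluation warranted" if value >= 1.02 else "low perpetrator risk"
print(f"R1 = {value:.4f} -> {flag}")
```

With nanomolar circulating payload and micromolar Ki, R1 lands very close to 1, which is the quantitative basis for the "low perpetrator risk" labeling statements cited above.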
Dedicated clinical studies were conducted for brentuximab vedotin and trastuzumab deruxtecan to assess the DDI risk for the released payload as a victim. A low magnitude of DDI was observed for MMAE and DXd when co-administered with strong CYP3A inhibitors and inducers. Co-administration of trastuzumab deruxtecan with itraconazole (a strong CYP3A inhibitor) or ritonavir (a dual OATP1B/CYP3A inhibitor) resulted in an 18% and 22% increase, respectively, in steady-state DXd exposure . The magnitude of these changes is not considered clinically meaningful. In the case of brentuximab vedotin, co-administration with ketoconazole (a strong CYP3A inhibitor) or rifampin (a strong CYP3A inducer) increased MMAE exposure by ~34% and decreased it by ~46%, respectively . As increased exposure to MMAE may increase the risk of adverse reactions, close monitoring of adverse reactions is recommended when brentuximab vedotin is given concomitantly with strong CYP3A inhibitors . For polatuzumab vedotin, an MMAE-containing ADC with the same linker and payload as brentuximab vedotin, a PBPK approach was adopted instead of a clinical DDI study to project the magnitude of DDI with strong CYP3A inhibitors and inducers. The PBPK model was developed using in silico and in vitro data together with in vivo ADME and pharmacokinetic data for MMAE and a vc-MMAE ADC, and was subsequently verified against the clinical DDI data for brentuximab vedotin . The model projections were used to inform the polatuzumab vedotin prescribing information. A slightly different approach was used for enfortumab vedotin, another MMAE-containing ADC, whose prescribing information simply refers to the clinical DDI results for brentuximab vedotin. For T-DM1, concomitant medication analysis in the Phase III pivotal clinical study showed that co-medication with CYP3A inhibitors and inducers did not result in any noticeable change in the pharmacokinetics of T-DM1 or DM1 . However, given that a dedicated clinical study was not conducted, cautionary language with detailed instructions was included in the T-DM1 label. DDI assessment for the mAb component of ADCs is relatively rare, since DDIs involving mAbs are typically limited. A population PK approach is commonly used for such assessments. Population PK analysis of inotuzumab ozogamicin identified concomitant rituximab treatment as one of the significant covariates on inotuzumab ozogamicin clearance (CL decreased by 16%). Similarly, population PK analysis of polatuzumab vedotin, which is approved for the treatment of relapsed or refractory (r/r) DLBCL in combination with rituximab and bendamustine, showed that combination with rituximab was associated with 24% higher acMMAE exposure (i.e., AUC) and 37–40% lower unconjugated MMAE exposure compared with single-agent treatment . There was no apparent impact of bendamustine on polatuzumab vedotin or MMAE PK. It is worth noting that the magnitude of the DDIs seen with concomitant medications was small and was not considered clinically relevant. Therefore, no dose adjustment is recommended for inotuzumab ozogamicin or polatuzumab vedotin in concomitant treatment with rituximab. In summary, given the complex structure and unique PK characteristics of ADCs, a risk-based DDI strategy integrating both the large- and small-molecule components of an ADC is warranted to support clinical development and approval. Theoretical risk assessment using in vitro DDI and clinical data should be conducted based on FDA and EMA DDI guidelines.
Depending on the level of risk, different approaches may be implemented to further assess the DDI potential. Modeling-based approaches, such as population PK and PBPK modeling, have become increasingly accepted and used to support DDI assessment and regulatory submissions for ADCs. According to the ICH E14 guidelines, it is generally recommended to evaluate the potential of non-antiarrhythmic drugs, such as ADCs, to prolong the QT/QTc interval during clinical development. A thorough QT study for an ADC is usually not feasible because of safety concerns about the cytotoxicity of released payloads in healthy subjects and ethical concerns regarding a placebo arm in cancer patients. As an alternative, a clinical study that incorporates many of the key components of the thorough QT study is usually needed for an ADC, especially when there is evidence suggesting that the small-molecule component of the ADC or its catabolites are present in human systemic circulation. Table summarizes the approaches and results of QT assessment for the seven approved ADCs. In general, these seven ADCs did not show a clinically meaningful impact on QTc prolongation, which is somewhat expected because the mAb component of the ADC is unlikely to interact with the human Ether-à-go-go-Related Gene (hERG) channel and the low concentrations of circulating payload after ADC dosing are unlikely to inhibit hERG channels in vivo. A dedicated clinical QT study was conducted for four of the seven ADCs. The study designs for brentuximab vedotin, T-DM1, and trastuzumab deruxtecan were similar, each involving a dedicated QT study collecting triplicate 12-lead ECG data with time-matched PK samples in approximately 50 cancer patients at a single dose level (i.e., the clinically approved dose for brentuximab vedotin and T-DM1, and a dose higher than the clinically approved dose for trastuzumab deruxtecan). The dedicated QT study for gemtuzumab ozogamicin is still ongoing (n = 56, NCT03727750). In comparison, inotuzumab ozogamicin, polatuzumab vedotin and enfortumab vedotin adopted a slightly different approach: instead of conducting a dedicated QT study, high-quality triplicate 12-lead ECGs and time-matched PK samples were integrated into existing clinical Phase I and/or Phase II studies. Data pooled from one or multiple studies with ~17–250 cancer patients were used for QT assessment. The majority of the approved ADCs had QT evaluation during Cycles 1 and 3, representative of first-dose and steady-state kinetics, except for enfortumab vedotin. Due to enfortumab vedotin's short half-life (3.4 days for ADC; 2.4 days for MMAE) and dosing on Days 1, 8 and 15 of a 28-day cycle (see Table ), triplicate 12-lead ECGs were collected on days 1 and 3 and days 15 and 17 of the first 28-day cycle to capture the QTc effects at the first dose and at steady state, respectively. Regardless of the study approach, analysis of ECG data from clinical studies typically follows the ICH E14 guidelines. For the seven approved ADCs, QT intervals corrected for heart rate using Fridericia's formula (QTcF) are commonly used in concentration-QTc analysis. Three analytes (i.e., ADC conjugate, total antibody and unconjugated payload) were included in the concentration-QTc analyses for T-DM1, inotuzumab ozogamicin and polatuzumab vedotin, while two analytes (i.e., ADC conjugate and unconjugated payload) were used for brentuximab vedotin, enfortumab vedotin and trastuzumab deruxtecan (Table ).
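Concentration-QTc analyses of the type summarized above are commonly implemented as linear mixed-effects models of ΔQTcF versus analyte concentration, with the predicted effect at a clinically relevant Cmax judged against the roughly 10 ms threshold of regulatory concern. The sketch below fits a deliberately simplified version of such a model to simulated data; it is not the model, data, or result for any approved ADC, and the confidence bound shown ignores the intercept-slope covariance for brevity.

```python
# Simplified concentration-QTc analysis on simulated data (linear mixed-effects model).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_pat, n_obs = 50, 4
subj = np.repeat(np.arange(n_pat), n_obs)
conc = rng.uniform(0.0, 4.0, n_pat * n_obs)            # ug/mL, simulated analyte concentrations
true_slope = 0.8                                       # ms per ug/mL (simulated truth)
dqtcf = (true_slope * conc
         + rng.normal(0, 5, n_pat)[subj]               # between-subject variability
         + rng.normal(0, 6, n_pat * n_obs))            # residual variability

df = pd.DataFrame({"subj": subj, "conc": conc, "dqtcf": dqtcf})
fit = smf.mixedlm("dqtcf ~ conc", df, groups=df["subj"]).fit()

cmax = 3.0                                             # clinically relevant Cmax (assumed)
pred = fit.params["Intercept"] + fit.params["conc"] * cmax
upper = fit.params["Intercept"] + (fit.params["conc"] + 1.645 * fit.bse["conc"]) * cmax
print(f"Predicted dQTcF at Cmax: {pred:.1f} ms (approximate upper 90% bound: {upper:.1f} ms)")
```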
Overall, the QTc risk for ADCs is expected to be low given the mAb component of the ADC and the low levels of circulating payloads. Leveraging preclinical and clinical data, such as the in vitro hERG assay, cardiac safety data in animals and the level of circulating payload, is important for developing an appropriate ECG strategy in clinical studies. Additionally, ECG monitoring may not be warranted for ADCs whose circulating concentrations of released payload are similar to or lower than those established as having no QT effect. Although dedicated QT studies have been conducted for four approved ADCs, there is an increasing trend toward integrating high-quality ECG monitoring and exposure-QTc analysis into existing Phase I and/or II studies as an effective way to assess overall risk and meet regulatory submission requirements. Given the relatively narrow therapeutic window of ADCs compared with mAbs, exposure–response (ER) analysis plays a critical role in supporting Phase II/III dose selection, label dose justification and guidance on dose adjustment for ADCs. The gemtuzumab ozogamicin dose is one example highlighting the importance of ER analysis for selecting an appropriate dose and schedule. Gemtuzumab ozogamicin was first granted accelerated approval in 2000 as a monotherapy at a dose of 9 mg/m2 for the treatment of patients with CD33-positive acute myeloid leukemia; however, the sponsor withdrew gemtuzumab ozogamicin from the market in 2010 after the confirmatory study failed to demonstrate better efficacy and showed higher rates of fatal hepatotoxicity and veno-occlusive disease (VOD). Exploratory ER analyses of gemtuzumab ozogamicin using single-agent data at the 9 mg/m2 dose showed that the risk of VOD increases as Cmax after the first dose increases, whereas the exposure-efficacy (i.e., complete remission) relationship was relatively flat for any exposure measure, including Cmax after the first dose, indicating that a fractionated lower dose may have the potential to reduce the risk of VOD while preserving the efficacy of gemtuzumab ozogamicin. A subsequent positive study read-out with fractionated dosing of 3 mg/m2 confirmed this hypothesis and demonstrated improved clinical benefit with reduced VOD risk, leading to the re-approval of gemtuzumab ozogamicin in 2017 . One unique feature of ADC ER analysis, which differs from that of other therapies, is that it requires a comprehensive understanding of which analyte(s) are the key drivers of efficacy and safety, owing to the complex structure of ADCs. Based on the mechanism of action, ADC conjugate, measured as conjugated antibody or conjugated payload, is generally believed to be the key analyte of interest driving safety and efficacy for an ADC. However, because released payloads are highly potent and may pose a safety risk, exposures of unconjugated drug are sometimes included in the exposure-safety analysis. Table summarizes the ER results for the seven approved ADCs. Among the seven approved ADCs, four, namely brentuximab vedotin, polatuzumab vedotin, enfortumab vedotin and trastuzumab deruxtecan, included both ADC conjugate and unconjugated drug analytes in their ER analyses. A positive exposure-efficacy relationship with ADC conjugate exposure was consistently observed for these four ADCs, whereas no apparent or a negative exposure-efficacy relationship was observed for unconjugated drug exposure.
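A typical exposure-efficacy analysis of the kind described above treats a binary endpoint (e.g., objective response) as a function of an exposure metric for the ADC conjugate, often via logistic regression, and inspects observed response rates by exposure quartile as a diagnostic. The sketch below does this on simulated data; the exposure distribution, slope, and sample size are arbitrary assumptions and do not reproduce any real trial.

```python
# Simulated exposure-efficacy analysis: logistic regression of response on conjugate AUC.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
auc = rng.lognormal(mean=3.0, sigma=0.4, size=n)       # simulated conjugate AUC (arbitrary units)
true_logit = -4.0 + 0.15 * auc                         # simulated true exposure-response
resp = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(int)

df = pd.DataFrame({"auc": auc, "resp": resp})
fit = smf.logit("resp ~ auc", df).fit(disp=False)
print(fit.params)                                      # intercept and slope on the logit scale

# Observed response rate by exposure quartile, a common diagnostic display
df["quartile"] = pd.qcut(df["auc"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
print(df.groupby("quartile", observed=True)["resp"].mean().round(2))
```

The quartile summary is also where the confounding caveat discussed below becomes visible: if sicker patients clear the ADC faster, low-exposure quartiles can show poor outcomes even when the true dose-response is flat.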
In comparison, exposure-safety relationships for the four ADCs vary, depending on the safety endpoints and analytes used in the analyses. For brentuximab vedotin and enfortumab vedotin, ADC conjugate exposure appeared to correlate better with safety than that of unconjugated drug. In the case of brentuximab vedotin, a positive exposure-safety relationship was observed with ADC conjugate exposure, but not with that of unconjugated drug, while for enfortumab vedotin, positive exposure-safety relationships were observed with exposure of both ADC conjugate and unconjugated drug, but the strength of the exposure-safety relationship appears to be much weaker for unconjugated drug. For polatuzumab vedotin and trastuzumab deruxtecan, no consistent exposure-safety trends were observed; positive exposure-safety relationships were observed sparsely between some safety endpoints and exposure of ADC conjugate and/or unconjugated payload. For T-DM1, inotuzumab ozogamicin and gemtuzumab ozogamicin, only one analyte was used in their ER analyses. Specifically, ADC conjugate was used for the ER analyses of T-DM1 and inotuzumab ozogamicin. For both ADCs, increased conjugate exposure appeared to be associated with improved efficacy (i.e., ORR, PFS, OS). No apparent positive exposure-safety relationship was observed with T-DM1 treatment (i.e., hepatotoxicity and thrombocytopenia), while a positive exposure-safety relationship was found between inotuzumab ozogamicin exposure and some treatment-related AEs (i.e., Grade 3+ thrombocytopenia and HEAB-assessed VOD). Given that ADC conjugate was not measured for gemtuzumab ozogamicin, the total antibody analyte was used for the ER analysis instead. Taken together, for most of the seven approved ADCs, efficacy endpoints appear to correlate best with ADC conjugate exposure rather than with that of unconjugated payload. For safety outcomes, while ADC exposures were often correlated with AEs, unconjugated payload exposures may also be important for certain AEs. The total antibody analyte was usually not included in the ER analysis because of the high correlation between conjugate and total antibody exposures . It is worth noting that four of the seven ADCs (i.e., gemtuzumab ozogamicin, brentuximab vedotin, T-DM1 and enfortumab vedotin) used data from a single dose level in the exposure-efficacy analysis, given that efficacy data are indication-specific and only one dose level is usually studied in the pivotal study. As with ER analyses of other cancer-targeting biologics, caution is needed when interpreting the ER results of an ADC when the analyses are performed with data from a single dose level, as the effect of disease severity on ADC exposure may confound the ER relationship (i.e., a visually steep trend is seen when the true relationship is flat) . The exposure-safety analysis, however, is less likely to be confounded because the safety data are often pooled across multiple studies, dose levels and patient populations. As illustrated in Table , a range of clinically tested doses was included in the ER safety analysis for most of the seven approved ADCs, while only three of the seven ADCs included multiple dose levels in the ER efficacy analyses. In summary, ER analysis provided valuable information beyond dose confirmation of the clinically tested dosage regimen in the phase 3 studies. We have illustrated how ER analyses of gemtuzumab ozogamicin enabled testing of a fractionated lower dose, ultimately leading to its re-approval.
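The dose-fractionation logic behind the gemtuzumab ozogamicin example can be illustrated with simple one-compartment kinetics: splitting the same total dose across several administrations lowers Cmax substantially while leaving cumulative AUC essentially unchanged, which is advantageous when a safety risk tracks Cmax and efficacy tracks a flatter exposure metric. The volume, clearance, body surface area, and dosing days below are generic placeholders, not gemtuzumab ozogamicin parameters.

```python
# Single 9 mg/m2 dose vs. fractionated 3 mg/m2 x 3 (hypothetical one-compartment IV bolus).
import numpy as np

V, CL = 10.0, 5.0                      # L and L/day, hypothetical one-compartment parameters
ke = CL / V                            # first-order elimination rate constant (1/day)
t = np.linspace(0.0, 14.0, 14001)      # observation grid over two weeks

def conc(doses):
    """Superpose IV-bolus concentration profiles; doses is a list of (time_day, dose_mg)."""
    c = np.zeros_like(t)
    for t0, d in doses:
        mask = t >= t0
        c[mask] += (d / V) * np.exp(-ke * (t[mask] - t0))
    return c

def auc(c):
    """Trapezoidal AUC over the observation grid (mg*day/L)."""
    return float(np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t)))

bsa = 1.8                              # m2, hypothetical patient
single = conc([(0.0, 9 * bsa)])                                        # one 9 mg/m2 dose
fractionated = conc([(0.0, 3 * bsa), (3.0, 3 * bsa), (6.0, 3 * bsa)])  # 3 mg/m2 on three days

for name, c in [("single 9 mg/m2", single), ("fractionated 3 x 3 mg/m2", fractionated)]:
    print(f"{name}: Cmax {c.max():.2f} mg/L, AUC(0-14 d) {auc(c):.2f} mg*day/L")
```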
Additionally, ER analyses can guide dose adjustment. For brentuximab vedotin, the positive ER relationships with peripheral neuropathy and neutropenia support the dose reduction recommendation from 1.8 to 1.2 mg/kg in the event of Grade 2+ peripheral neuropathy or Grade 4 neutropenia . Furthermore, ER analysis can be used to identify the appropriate therapeutic dose for phase 2. For trastuzumab deruxtecan, ER analysis identified two potential phase 2 doses of 5.4 and 6.4 mg/kg from phase 1 data and supported the final dose recommendation of 5.4 mg/kg in pivotal studies, based on similar predicted ORR probabilities (predicted ORR [90% CI] of 0.63 [0.55, 0.70] and 0.68 [0.58, 0.77] for 5.4 mg/kg and 6.4 mg/kg, respectively) and on exposure-safety relationships showing a greater rate of AEs in the 6.4 mg/kg group than in the 5.4 mg/kg group . ADCs represent a rapidly evolving area of oncology drug development and hold significant promise. The complex structure of ADCs poses unique challenges for the clinical pharmacology strategy supporting their development and approval, since it requires a quantitative understanding of the PK and PD properties of multiple different molecular species (e.g., ADC conjugate, total antibody and unconjugated payload) in the systemic circulation and/or tissues of interest (e.g., tumors). Integration of diverse clinical pharmacology approaches, ranging from dedicated clinical pharmacology studies (e.g., DDI, QTc, and renal/hepatic impairment studies) to mechanistic and/or empirical models (e.g., PBPK, one- or two-analyte population PK modeling, and exposure–response analysis), can provide insights into the PK, PD and ADME properties of an ADC and inform development decisions and clinical dose and schedule selection (Fig. ). An additional consideration for clinical development not discussed in this review is the thorough assessment of the impact of immunogenicity on ADC PK, efficacy, and safety. As the field continues to evolve, the selection of suitable ADC targets and the identification of a target population remain critical challenges. Efforts to further optimize "next-generation" ADCs using engineered antibodies, innovative linkers, conjugation methods, and novel payloads are rapidly advancing. Despite the great success of ADCs, their therapeutic window remains relatively narrow, with the MTD often reached before the maximum efficacious dose is achieved. Additionally, the toxicities associated with ADCs might dictate the number of treatment cycles that patients can tolerate and often result in dose delays, dose reductions or study discontinuation . The future success of ADCs will depend in part on our ability to overcome these development challenges, especially by developing clear strategies to optimize the dose and schedule of ADCs and by identifying predictive biomarkers to assess response, optimize patient selection, and inform potential combination therapies.
Heart Rate Variability as a Digital Biomarker in Adolescents and Young Adults Receiving Hematopoietic Cell Transplantation
e816a688-d915-4651-9afc-148346918e6d
11843223
Surgical Procedures, Operative[mh]
Introduction Adolescents and young adults (AYAs) with cancer are at high risk for poor psychosocial outcomes. In addition to the more visible physical side effects of cancer and its treatment, AYAs endorse significantly worse mental health than their non-cancer peers . Cancer disrupts the normal trajectories of identity development, educational and vocational attainment, and emerging independence that are hallmarks of adolescence and early adulthood. These challenges may be more pronounced for AYAs receiving hematopoietic cell transplantation (HCT) to treat their disease. HCT is a highly intensive treatment with increased risks of short- and long-term morbidity. Nearly half of pediatric patients experience significant increases in anxiety, depression, and loneliness following transplant, and over 80% of patients report moderate post-traumatic stress symptoms 3 months post-HCT . Emerging experimental and translational data are beginning to shed light on the complex biopsychosocial mechanisms relevant to cancer . However, the overwhelming majority of this research is focused on adults with cancer, leaving a gap in our understanding of the biobehavioral landscape in AYA oncology. Identifying biomarkers of psychosocial constructs is a key step toward understanding the converging psychological and biological processes in cancer. Fortunately, new tools in behavioral quantification and digital health technology (DHT) can facilitate scientific progress toward this goal . Digital biomarkers of psychosocial states, such as stress, symptom burden, and mood, can improve care delivery and patient outcomes . One such digital biomarker is heart rate variability (HRV), which is the fluctuation in intervals between successive heartbeats. HRV is a multidimensional measure of autonomic nervous system regulatory activity and has been associated with meaningful clinical and psychosocial outcomes . Indeed, lower HRV (indicating less 'autonomic flexibility') has been associated with anxiety , depression , and adolescent emotion regulation , as well as cancer-related fatigue and mortality . Impaired HRV has been observed in the adult HCT and HCT survivor population , but there have been few studies in AYAs receiving transplant. The objectives of this study were to prospectively describe the trajectory of HRV among AYAs undergoing HCT and explore the relationship between HRV and patient-reported outcomes (PROs). We hypothesized that HRV would correlate with PROs, and that baseline HRV would predict change in PROs over time. Materials and Methods 2.1 Design This was a prospective study embedded in a randomized clinical trial (RCT) testing the Promoting Resilience in Stress Management (PRISM) resilience intervention compared to usual psychosocial care in AYAs with cancer receiving HCT (NCT03640325) . The primary aim of the trial was to determine the intervention's effects on patient-reported anxiety and depression symptoms. Participants for this optional ancillary HRV study were recruited from 3 sites (Seattle Children's Hospital, St. Jude Children's Research Hospital, and Children's Hospital of Los Angeles) from January 2019 to March 2023. The study was approved by the Institutional Review Board at each hospital. 2.2 Participants Eligible participants were those aged 12–24 years within 4 weeks of receiving an allogeneic or autologous HCT for a malignancy or cancer predisposition syndrome who spoke English or Spanish and were cognitively able to participate in study activities.
Written informed consent was obtained for participants aged 18 years and older; written informed assent and caregiver consent were obtained for participants aged 12–17 years. 2.3 Study Procedures Among enrolled participants, we recorded HRV at baseline, 1-, and 3-months post HCT to align with RCT time points and salient clinical milestones (stem cell engraftment and transition to less intensive HCT care). PROs were collected at baseline and 3 months and included validated surveys of anxiety and depression (Hospital Anxiety and Depression Scale (HADS)) , quality of life (Pediatric Quality of Life (PedsQL) Generic and Cancer Module Teen Reports) , hope (Snyder Hope Scale) , and resilience [Connor-Davidson Resilience Scale (CD-RISC)] . HRV was quantified using the Actiheart 5 wearable device (CamnTech Inc., UK; FDA class 2, 510(k) number K052489). The Actiheart 5 is an FDA-approved, lightweight, wireless electrocardiogram (ECG) monitor with established validity and reliability . The device was attached to a patient's torso via two standard ECG electrodes and worn for a 24-h period, the gold standard for HRV assessment. Comprehensive HRV output metrics were computed by the Actiheart Software (CamnTech Inc) using a 256 Hz sampling rate, bandwidth of 0.05–55 Hz, and 5-min epochs. Time and frequency domain measures were derived per established guidelines, including the standard deviation of normal-to-normal beats (SDNN), root mean square of successive differences (RMSSD), high frequency (HF), low frequency (LF), and the LF/HF ratio . Prior to analysis, two study team members manually inspected raw ECG data and removed artifacts not detected by the Actiheart software. We identified analytic windows with the longest contiguous segment of high-quality data and without relevant abnormalities (ectopy, wide QRS intervals, arrhythmias). Overnight tracings were preferentially selected to minimize motion artifact and circadian variation. Medical record abstraction identified clinical variables relevant to HRV, including coexisting cardiac conditions, cardioactive medications, abnormal echocardiograms or ECGs, and concomitant infectious or febrile illness. HRV data were collected and analyzed in accordance with the Guidelines for Reporting Articles on Psychiatry and Heart rate variability . 2.4 Statistical Design and Analysis We described patient baseline characteristics using counts and percentages or medians and interquartile ranges. We summarized HRV measures by (1) sex and (2) intervention arm at each time point using means, medians, and interquartile ranges. Patient-reported outcome scores were summarized at each time point using means and standard deviations. Pearson correlation coefficients were used to assess the relationship between baseline HRV and PROs, as well as the relationship between HRV and change in anxiety symptoms from baseline to 3 months in the pooled cohort. There was insufficient 3-month HRV data in this cohort to reliably perform correlation testing, so this was not included in the analysis.
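For readers unfamiliar with the time-domain metrics named above, the snippet below shows how SDNN and RMSSD are conventionally derived from a series of normal-to-normal (NN) intervals. The interval series is synthetic, and the artifact removal and window selection steps described in the methods are omitted.

```python
# Time-domain HRV metrics from normal-to-normal (NN) intervals, in milliseconds.
import numpy as np

rng = np.random.default_rng(7)
nn_ms = 800 + rng.normal(0, 40, size=300)            # synthetic NN intervals (~75 bpm)

sdnn = np.std(nn_ms, ddof=1)                         # standard deviation of NN intervals
rmssd = np.sqrt(np.mean(np.diff(nn_ms) ** 2))        # root mean square of successive differences
print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms")
```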
Results A total of 39 HRV recordings were collected from n = 16 patients at the three institutions; eight were randomized to the PRISM intervention, and eight received usual psychosocial care. Participants were aged 12–21 years, 53% were male, and just over half (56%) had a diagnosis of acute myelogenous leukemia (Table ). All participants received an allogeneic transplant, with cord blood as the most common graft source (47%). Nine participants had ECGs labeled as abnormal within 4 weeks of baseline HRV recordings, but the majority (6/9) were normal pediatric ECG variants, including nonspecific T wave abnormalities (n = 3), sinus arrhythmia (n = 2), and low voltage QRS (n = 1). There were no documented concurrent infections or fevers during the HRV recordings. Median (IQR) baseline SDNN and RMSSD were 27 (21, 38) and 11 (8, 22) for males and 25 (22, 27) and 13 (4, 19) for females, respectively (Table ). For reference, published normative adolescent SDNN and RMSSD median (IQR) values are 63 (48, 85) and 59 (45, 88) for males and 66 (46, 87) and 69 (49, 100) for females, respectively (Figure ) . Among patients with longitudinal data, there were no detectable differences in HRV based on the intervention, nor did we identify consistent patterns of change in HRV metrics over time (Figures and ). When examining the relationship between baseline HRV and patient-reported outcomes, there was a moderate inverse correlation between HRV and anxiety and depression, as expected. Participants with lower (inferior) baseline SDNN or RMSSD reported higher anxiety and depression symptoms at baseline (anxiety: r = −0.35 (p = 0.18) for SDNN, r = −0.47 (p = 0.07) for RMSSD; depression: r = −0.26 (p = 0.34) for SDNN, r = −0.39 (p = 0.14) for RMSSD) (Figure ). We did not detect meaningful correlations between baseline HRV and other PROs (Figure ). Among a subset of patients with baseline anxiety symptoms elevated above the sample median (HADS Anxiety score > 5, n = 8), higher baseline HRV correlated with greater improvement in anxiety scores from baseline to 3 months (r = −0.65 (p = 0.08) for SDNN, r = −0.31 (p = 0.45) for RMSSD) (Figure ). Discussion In this prospective study of AYAs receiving HCT, there was a moderately strong correlation between measures of HRV and patient-reported symptoms of anxiety and depression. This relationship was most pronounced among AYAs with elevated symptoms of anxiety, which is consistent with other studies linking reduced HRV and anxiety disorders . We did not detect correlations between HRV and patient-reported resilience, hope, or health-related quality of life. Compared to published values for healthy adolescents, AYA participants had inferior baseline measures of HRV, and this did not improve substantially over time. Although our numbers are small, these findings are consistent with prior work in this field . One shared pathway that may at least in part explain the relationship between psychological symptoms, HRV, and health impacts involves the autonomic nervous system. Excessive or prolonged sympathetic nervous system activation, which commonly occurs with anxiety or worry , is a known risk factor for cardiovascular and other chronic diseases and may have important cancer-related implications as well. Sympathetic nervous system regulation governs aspects of stem cell biology and trafficking, the tumor microenvironment, metastatic potential, oncogene transcription, and immune cell function .
Thus, sympathetic nervous system dysregulation has the potential to directly influence cancer biology and clinical outcomes. Leveraging HRV to more fully understand this pathway could be especially useful among AYAs with cancer, given the high prevalence of anxiety and generally inferior cancer‐related outcomes in this population. Among participants with elevated baseline anxiety scores, more favorable baseline HRV profiles correlated with greater improvement in anxiety scores over time. This may suggest that higher ‘autonomic flexibility’ as indexed by higher HRV identifies a subset of individuals with a greater capacity for responsiveness to an intervention. In this way, digital biomarkers like HRV could augment standard psychosocial or clinical characteristics when interpreting results of behavioral intervention trials and ultimately enhance precision when risk‐stratifying and risk‐adapting resource‐limited interventions. For example, a ‘stepped care’ model that integrates patient‐reported and HRV biomarker data could be used to designate psychosocial risk groups. Higher intensity behavioral intervention (earlier, more frequent, one‐on‐one, specialized, etc.) could then be allocated to higher risk subgroups. This study had notable limitations that should be considered when interpreting the results. First, although this was a multisite study, we enrolled participants during the peak of the COVID‐19 pandemic, resulting in a small sample size. However, HRV data quality for each participant was excellent, optimized through the use of 24‐h ECG‐derived recordings and stringent quality control procedures. Additional participant characteristics, including prior total anthracycline exposure, physical fitness, or sleep quality, were incompletely captured and could be contributing to HRV metrics. Although this study was embedded in an RCT testing the PRISM behavioral intervention, there were too few longitudinal data points to formally test the effects of the intervention on HRV change over time. With fewer pandemic‐related research restrictions, ongoing work is focused on prospective data collection in a larger patient population. Conclusion In summary, our data support a relationship between HRV and psychosocial symptoms among AYAs receiving HCT. The field of digital psychosocial biomarker science is growing rapidly and provides novel tools to help shape the future of cancer research and clinical care . Wearables that capture HRV offer a minimally burdensome, non‐invasive way to collect real‐time, high‐resolution biometric data that can be integrated into prognostic or therapeutic studies and clinical care. The ‘digital native’ AYA population may be especially suited to the integration of wearable technology into their daily lives, making HRV an appealing biomarker in this group. Additional work should build on this data to validate clinically meaningful HRV cut points in the AYA population. Future studies should also incorporate complementary psychosocial biomarker platforms, such as smartphone digital phenotyping or social genomics , that can help facilitate a deeper understanding of complex biobehavioral processes and enhance the infrastructure of precision supportive care in oncology. Mallory R. Taylor: conceptualization (lead), data curation (lead), formal analysis (supporting), funding acquisition (lead), investigation (lead), methodology (equal), writing – original draft (lead). Miranda C. 
Bradford: conceptualization (supporting), formal analysis (lead), methodology (supporting), visualization (lead), writing – review and editing (equal). Chuan Zhou: methodology (supporting), validation (supporting). Kaitlyn M. Fladeboe: methodology (supporting), project administration (supporting), writing – review and editing (equal). Jorie F. Wittig: data curation (supporting), project administration (supporting), writing – review and editing (supporting). K. Scott Baker: conceptualization (supporting), methodology (supporting), supervision (equal), writing – review and editing (equal). Joyce P. Yi‐Frazier: conceptualization (supporting), investigation (supporting), methodology (supporting), supervision (supporting), writing – review and editing (equal). Abby R. Rosenberg: conceptualization (supporting), funding acquisition (supporting), methodology (supporting), resources (supporting), supervision (lead), writing – review and editing (equal). This study was approved by the Institutional Review Board at each participating site. The authors declare no conflicts of interest. Figure S1: Baseline cohort heart rate variability metrics by sex compared to healthy adolescents. HRV = heart rate variability, SDNN = standard deviation of normal‐to‐normal beats, RMSSD = root mean square of successive differences. Figure S2: Individual (circles) and group mean (lines) heart rate variability metrics by sex over time. HRV = heart rate variability, SDNN = standard deviation of normal‐to‐normal beats, RMSSD = root mean square of successive differences, HF = high frequency, LF = low frequency, T2 = 1 month, T3 = 3 months, T4 = 6 months. Figure S3: Individual (circles) and group mean (lines) heart rate variability metrics by intervention arm over time. HRV = heart rate variability, SDNN = standard deviation of normal‐to‐normal beats, RMSSD = root mean square of successive differences, HF = high frequency, LF = low frequency, T2 = 1 month, T3 = 3 months, T4 = 6 months, PRISM = Promoting Resilience in Stress Management.
Expression of stem cell markers SALL4, LIN28A, and KLF4 in ameloblastoma
eaf8338b-f3ff-4bfa-9656-b752cd8f7091
10413759
Anatomy[mh]
Ameloblastomas (AME) are odontogenic tumours of epithelial origin. Although classified as a form of benign tumour, AME is characterised by aggressiveness, an infiltrative nature, and a high recurrence tendency. They can grow to large dimensions and invade adjacent structures, causing significant morbidity . According to the most recent classification by the World Health Organisation (WHO), AME can be categorised into three types: conventional, unicystic, and extraosseous/peripheral. Among these, conventional AME is the most common and most aggressive . The current therapeutic options include both conservative and radical approaches . Conservative treatments involve enucleation and curettage , which avoid relevant facial deformities but have recurrence rates of up to 55% . More invasive treatments involve marginal resection, which is the method of choice, considering the high recurrence rates reported for AME . However, this method can result in relapse rates of up to 15% in more aggressive AME types as well as significant facial deformities . Although the causes of the aggressive biological behaviour and high rate of recurrence of this benign neoplasm are not fully understood, studies involving stem cells and their possible relationship with the aetiopathogenesis of neoplasms have been relevant in elucidating this behaviour . Stem cells can perpetuate themselves through self-renewal mechanisms and differentiate into cells in specific tissues. These mechanisms are like those that occur in tumour cells and involve similar signalling pathways. Thus, both normal stem cells and tumorigenic cells have proliferative potential and the ability to give rise to new tissues called cancer stem cells . Cancer stem cells proliferate uncontrollably, driving the formation and growth of tumors, generating heterogeneous malignant cells associated with recurrence and metastasis. It is believed that cancer cells can appropriate the self-renewal machinery that is normally expressed in normal stem cells . It has been reported that AME cells originate from odontogenic stem cells located in the dental lamina and that tumours are likely initiated in normal stem cells that contain a perpetual minority of cancer stem cells . In this context, the SALL4 (Spalt-Like Transcription Factor 4), LIN28A (LIN28 homolog A), and KLF4 (Kruppel-like factor 4) proteins, which act as essential regulators of pluripotency and embryonic self-renewal and can mediate tumour progression and differentiation, are relevant biomarkers for the analysis of stem cells . SALL4 is an essential transcription factor for the maintenance of self-renewal and pluripotency of embryonic stem cells that occurs in early embryonic development . Its expression is downregulated after birth and is absent in most adult human tissues. Specifically, its expression in adult tissues is restricted to germ cells , except for human CD34+ haematopoietic stem cells . However, its high expression and dysregulation have been demonstrated in several types of cancer , such as leukemia, germ cell tumours, hepatocellular carcinoma, and lung cancer , where it acts as an oncogene and participates in the processes of initiation, development, and progression of cancer . LIN28A is a highly conserved ribonucleic acid (RNA)-binding protein that plays a key role in cell development and pluripotency by regulating the process of cell proliferation and differentiation . It is expressed in the embryos, stem cells, and embryonic carcinoma cells . 
Its performance occurs both physiologically (i.e. through the renewal and differentiation of stem cells, tissue repair, and glucose metabolism) and pathologically, where high levels are correlated with advanced malignant tumours, poor prognosis, and increased risk of recurrence . KLF4 is a transcription factor that regulates cellular processes of development, differentiation, proliferation, and apoptosis. Depending on the cell type, KLF4 acts as both a tumour suppressor and an oncogene . Furthermore, KLF4 is involved in stem cell renewal and maintenance of pluripotency . The protein–protein interaction network was analysed for SALL4, LIN28A, and KLF4 proteins using bioinformatics with the STRING (Search Tool for Recurring Instances of Neighbouring Genes) platform . According to the STRING platform, a direct association among them was demonstrated in all interactions obtained from the selected databases and was confirmed using text mining analysis. Interactions among LIN28A, KLF4, SALL4, and KLF4 were determined experimentally. Additionally, the platform demonstrated protein homology between SALL4 and KLF4, and computationally identified co-expression of LIN28A and SALL4 based on transcript-transcript interactions (see Fig. ). Understanding the molecular mechanisms underlying AME through the expression of stem cell biomarkers can help elucidate the role of these cells in the aetiopathogenesis of this tumour. Therefore, the present study aimed to evaluate the in situ and in vitro expression of stem cell markers SALL4, LIN28A, and KLF4 in AME. This is the first work to simultaneously investigate these three proteins in this benign neoplasm. Ethical aspects This study was approved by the Comitê de Ética em Pesquisa com Seres Humanos – Universidade Federal do Pará, Belém, Pará, Brazil, Committee Reference No.: 5.490.937. Samples For the in vitro study, the cell line derived from human AME, called AME-hTERT, established at the Cell Culture Laboratory of the Faculty of Dentistry, Universidade Federal do Pará (UFPA), was used . For the in situ study, 21 cases of conventional AME, ten cases of dentigerous cysts (DC), and ten cases of dental follicles (DF) were collected at the Laboratory of Pathological Anatomy and Immunohistochemistry of the Graduate Program in Dentistry, Universidade Federal do Pará, and Centro Universitário do Pará (CESUPA). In the in situ study, the cases of DC and DF were used as comparative samples, considering that DC and AME are benign odontogenic lesions but present with less aggressive behaviour and a low incidence of recurrence. Meanwhile, DF is a tissue without pathological neoplastic changes of odontogenic origin. Cell cultivation The cell line derived from AME, established, and characterised at the Laboratory of Cell Culture of the Graduate Program of the Universidade Federal do Pará (UFPA), was cultivated in DMEM/F-12 medium (Sigma Chemical Co., St. Louis, MO, USA), supplemented with 10% foetal bovine serum (Gibco, Carlsbad, CA, USA), 1% penicillin (Gibco®) and 0.1% antifungal (Gibco®). The cells were kept in an incubator at a temperature of 37ºC and a humid atmosphere containing 5% CO 2 . Cell proliferation was observed daily using an inverted phase-contrast microscope (Axiovert 40 C – Zeiss) equipped with a coupled camera (AxioCam MRc, Zeiss). Immunohistochemistry The immunohistochemistry technique used in the present study was performed according to the following protocol. 
First, deparaffinization of the slides in xylene and hydration in decreasing alcohol concentrations (100%, 90%, 80% and 70%) was conducted. Then endogenous peroxidases were blocked by immersing the slides in 3% H 2 O 2 to methanol at a 1:1 ratio for 30 min. Antigenic recovery was conducted in a Pascal pressure chamber (Dako Cytomation, Carpinteria, CA, USA) in a citrate buffer (pH 6.0) for 30 s with a temperature of 123°C and pressure of 13.5 psi. Finally, non-specific antibodies were blocked with 1% bovine serum albumin (BSA; Sigma-Aldrich) in a phosphate-buffered saline solution for 1 h. Incubation of primary antibodies for Anti-Sall4 (1:25, mouse, Santa Cruz Biotechnology, Santa Cruz, CA, USA) was conducted for 12–14 h (overnight) and for 1 h for the Anti-LIN-28 (1:30, mouse, Santa Cruz Biotechnology ®) and Anti-GKLF (1:100, mouse, Santa Cruz Biotechnology®) in the AME, DC, and DF samples. Secondary antibodies were incubated with an Immunoprobe Plus detection system (Advanced Biosystems, San Francisco, CA, USA) for 30 min. Diaminobenzidine chromogen (ScyTek, Logan, UT, USA) was used. The slides were counterstained with haematoxylin (Sigma-Aldrich) and mounted using Permount (Fisher Scientific, Fair Lawn, NJ, USA). Testicular seminoma samples were used as positive controls. As a negative control, the primary antibody was replaced with BSA (Sigma-Aldrich) or non-immune serum. Immunohistochemical evaluation The immunohistochemical (IHC) evaluation was performed by measuring the area (μm) and fraction of SALL4, LIN28A, and KLF4 and labelling in the AME, DC, and DF samples. Images were obtained using an Axioscope A1 microscope (Zeiss®) equipped with an AxioCam HRC colour CCD camera (Zeiss®) with a bright field. Five areas in each sample were selected based on the quantity and morphological preservation of the parenchyma. Images were acquired using a 40x objective. Areas of the tumour parenchyma were separated and segmented using the “IHC Image Analysis Toolbox” plug in (Jie Shu, Guoping Qiu and Mohammad Ilyas, https://imagej.nih.gov/ij/plugins/ihc-toolbox/index.html ) using ImageJ (public domain software developed by Wayne Rasband (NIMH, National Institutes of Health, Bethesda, MD, USA; http://rsbweb.nih.gov/ij/ ). After segmenting the images, the area and diaminobenzidine (DAB) staining fraction were measured, and the immunostaining differences found in AME, DC, and DF were analysed. Indirect immunofluorescence AME-hTERT cells were cultured on glass coverslips in 24-well plates and subjected to indirect immunofluorescence to observe the expression of SALL4, LIN28A, and KLF4. The technique involved the following steps: cell fixation in 2% paraformaldehyde for 10 min, washing with phosphate-buffered saline (PBS), permeabilisation of the membrane with a 0.5% Triton X-100 solution (Sigma®) for 15 min, wash with PBS, incubation in PBS/1% BSA for 30 min, and incubation with primary monoclonal antibodies diluted in PBS/BSA at 1% for a minimum of 12 h and a maximum of 18 h in a humid chamber at 4ºC. The primary antibodies used were Anti-Sall4 (1:50, mouse, Santa Cruz Biotechnology®), Anti-LIN-28 (1:50, mouse, Santa Cruz Biotechnology®), and anti-GKLF (1:50, mouse, Santa Cruz Biotechnology®). To detect the primary antibody, incubation was performed in a solution containing the secondary antibody conjugated to Alexa Fluor 488 (Invitrogen, Carlsbad, CA, USA) for 1 h in a humid and dark chamber at room temperature. 
Nuclei were stained with Hoechst 33258 (1: 2,000, Sigma) and cytoskeletons were stained with Alexa Fluor 568 phalloidin (Life Technologies, Carlsbad, CA, USA). The coverslips were immersed in PBS and distilled water and mounted using the ProLong® Gold antifade reagent (Invitrogen®). Next, the cells were analysed using a fluorescence microscope (AxioScope.A1, Zeiss) equipped with a digital camera (AxiocamMRc, Zeiss). Images of the slides were obtained for the registration of immunoexpression using a 40x objective. As a negative control, the primary antibody was replaced with a non-immune serum.
Data were analysed using GraphPad Prism 8 software (GraphPad Software Inc., San Diego, CA, USA). When parametric distribution was evidenced by the Shapiro-Wilk test, differences between groups were evaluated by one-way analysis of variance (ANOVA) followed by Tukey’s post-hoc test. When a non-parametric distribution was evidenced by the Shapiro-Wilk test, the differences between groups were evaluated by the Kruskal-Wallis test, followed by Dunn’s post-test of multiple comparisons. A 95% confidence interval (CI) was assumed (α = 0.05).
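As a rough illustration of the test-selection rule described above (Shapiro-Wilk normality check, then one-way ANOVA with Tukey’s post hoc or Kruskal-Wallis with Dunn’s post test), the sketch below uses SciPy in place of GraphPad Prism. The three groups of staining fractions are hypothetical values chosen only to show the workflow, not the study data.

```python
# Minimal sketch of the group-comparison workflow described in the methods,
# using SciPy instead of GraphPad Prism. The AME, DC, and DF staining
# fractions below are hypothetical numbers for illustration only.
import numpy as np
from scipy.stats import shapiro, f_oneway, kruskal

rng = np.random.default_rng(1)
ame = rng.normal(0.40, 0.08, 21)   # hypothetical DAB staining fractions
dc = rng.normal(0.12, 0.05, 10)
df_ = rng.normal(0.08, 0.04, 10)

groups = [ame, dc, df_]
alpha = 0.05

# Shapiro-Wilk on each group: take the parametric route only if no group
# rejects normality at alpha.
normal = all(shapiro(g).pvalue > alpha for g in groups)

if normal:
    stat, p = f_oneway(*groups)   # one-way ANOVA
    label = "ANOVA (follow with Tukey's post hoc, e.g. statsmodels pairwise_tukeyhsd)"
else:
    stat, p = kruskal(*groups)    # Kruskal-Wallis
    label = "Kruskal-Wallis (follow with Dunn's post test, e.g. scikit-posthocs)"

print(f"{label}: statistic = {stat:.2f}, p = {p:.4f}")
```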
Demographical and clinical data, and histopathological typing of patients with AME In the studied sample, the mean age was 37 years. Males represented 57% of the samples. The region of greatest involvement was the mandible, totalling 95% of the cases. As for the histological types, eight cases were of the follicular type, eight of plexiform, three acanthomatous and two of granular cells (see Table ). Immunohistochemical staining for SALL4, LIN28A, and KLF4 in ameloblastoma, dentigerous cyst, and dental follicle IHC staining for SALL4, LIN28A, and KLF4 was predominantly observed in the epithelial cells of the tumour islands (see Fig. ). For SALL4, intense staining was observed in the tumour parenchyma in both the nucleus and cytoplasm of the cells, in the high columnar cells of the periphery, and in the central cells of the tumour island. LIN28A showed strong immunostaining with nuclear and cytoplasmic localisation limited to the central cells of the tumour island. Intense immunostaining was observed for KLF4 nuclear localisation in epithelial cells. The labelling was predominantly located in the nuclei of tall columnar cells at the periphery of the tumour island. Nuclear and cytoplasmic markings of SALL4, LIN28A, and KLF4 were also observed in some cases of DC and DF; however, the expression of SALL4, LIN28A, and KLF4 was significantly higher in AME samples compared to DC ( p < 0.001) and DF ( p < 0.001), as demonstrated in the statistical analysis (see Fig. ). AME-hTERT lineage expressed SALL4, LIN28A, and KLF4 The immunofluorescence assays revealed that the AME-hTERT strain expressed SALL4, LIN28A, and KLF4 (see Fig. ). Neoplastic cells demonstrated nuclear and cytoplasmic expression of SALL4 (see Fig. A), nuclear and cytoplasmic expression, with nuclear predominance of LIN28A (see Fig.
E) and predominantly nuclear expression of KLF4 (Fig. I). The immunoexpression of all proteins was granular (see Fig. ). The studied proteins showed significantly higher levels of immunostaining in AME cells than in the DC and DF cells. SALL4, LIN28A, and KLF4 were expressed in the AME parenchyma, with slight staining observed in some cells of the odontogenic epithelium of DC and DF. Furthermore, immunoexpression of the studied proteins was observed in the AME-hTERT strain. SALL4 is a transcription factor that plays a key role in maintaining pluripotency and self-renewal of embryonic stem cells . It interacts with other important regulatory proteins of embryonic pluripotency — OCT4 (octamer-binding protein 4), SOX-2 (HMG-box gene 2 related to SRY), and NANOG (homeodomain protein) — forming an autoregulatory circuit in which each of these proteins regulates its own expression and that of others . High expression of SOX-2, NANOG, and OCT4 has been demonstrated in AME , suggesting that these proteins may act together to maintain undifferentiated stem cells in this tumour. In the present study, cells from the AME tumour islands and the AME-hTERT lineage showed nuclear and cytoplasmic expression of SALL4, corroborating the marking pattern found by other studies in oral squamous cell carcinoma that suggests that this protein plays an important role in the progression of oral cancer and may serve as a potential therapeutic target . Nuclear labelling indicated the transcriptional activity of SALL4. Such activity is associated with transcriptional repression mechanisms that prevent stem cell differentiation and increase the proliferation of undifferentiated cells . SALL4 cytoplasmic marking has also been demonstrated in breast cancer cells and is considered a predictor of poor prognosis . Studies have shown that SALL4 protein expression is negatively regulated by miRNAs (miRNAs) belonging to the Let-7 family, particularly by miR-98, which leads to a reduction in tumour cell proliferation, indicating that miR-98 acts as a tumour suppressor that inhibits SALL4 protein expression . It is important to emphasise that LIN28 protein can downregulate the Let-7 microRNA family through the activation of its isoforms LIN28A or LIN28B . This suggests that the upregulation of LIN28 leads to the inhibition of miR-98, which, in turn, leads to the upregulation of SALL4. Therefore, the co-expression of SALL4 and LIN28A in AME observed in this study may play a significant role in tumour pathogenesis. LIN28A is a highly conserved RNA-binding protein that plays a significant role in development, glucose metabolism, and pluripotency . It is highly expressed in mouse embryonic stem cells, which decreases after differentiation, and in human embryonic carcinoma cells . Oral squamous cell carcinoma has been demonstrated to be associated with the regulation of the proliferative and invasive activities of this neoplasm . Furthermore, LIN28A has been identified as one of the four factors that convert fibroblasts into induced pluripotent stem cells, corroborating the role of this protein in pluripotent stem cells . In this study, LIN28A immunostaining showed nuclear and cytoplasmic localisation limited to the central cells of the AME tumour islands. The same expression pattern was observed for the AME-hTERT strain. LIN28A is predominantly cytoplasmic and located on ribosomes, P bodies, and cytoplasmic stress granules . 
The cytoplasmic expression found in the present study may be associated with its performance in the recruitment of terminal uridylyl transferase (TUTase4) ZCCHC11, which inhibits Let-7 processing in the cytoplasm and selectively blocks the expression of Let-7 miRNAs and their functions tumour suppressors, acting as an oncogene and promoting tumorigenesis . This action has been demonstrated in embryonic stem cells, suggesting a significant role of LIN28A in inhibiting cell differentiation through miRNAs in stem cells and certain types of cancer . Nuclear expression of LIN28A was observed when both RNA-binding domains were mutated . A model has been proposed in which LIN28A regulates the post-transcriptional processing of its mRNA targets by first binding to these targets in the nucleus and subsequently transporting them between ribosomes, P bodies, or stress granules for translation regulation, depending on the environmental conditions ; however, more studies are needed to better understand this process. The central region of the AME tumour islands, which exhibits greater LIN28A labelling, is more prone to hypoxia. As the tumour progresses, the concentration of oxygen in the microenvironment around the tumour cells decreases, leading to intratumoural hypoxia . In response to this condition, hypoxia-induced factor-1 alpha (HIF-1α) regulates the expression of genes that help cells adapt to this environment . Studies have indicated that HIF-1α is overexpressed in AME, suggesting that hypoxia is related to proliferation and invasion of the solid areas of this tumour . It has been shown that HIF-1α binds directly to the LIN28A promoter and induces its transcription and that hypoxia is capable of inducing the expression of stem cell markers in cancer cell lines, thereby contributing to the dedifferentiation and reprogramming process that induces the formation of cancer stem cells . From this, we can infer that the expression of LIN28A in the central cells of the AME tumour island close to the high columnar cells in the periphery may be associated with the adaptive response of tumour cells to hypoxia, inducing the dedifferentiation of peripheral cells, and thus promoting greater proliferation and invasion. KLF4 is an essential transcription factor in the regulation of cellular processes (e.g. development, differentiation, proliferation, and apoptosis) and in the renewal of stem cells and maintenance of pluripotency . It has been used as a reprogramming factor for fibroblasts and odontoblasts in induced pluripotent stem cells along with LIN28A . In different cell types, KLF4 functions as both a tumour suppressor and an oncogene . Increased expression has been reported in human head and neck squamous cell carcinoma and is associated with poor prognosis and aggressiveness, corroborating its oncogenic role . In contrast, Land et al. found an association between high KLF4 expression and a favourable prognosis. Another study reported the role of KLF4 in oral squamous cell carcinoma, in which mechanisms of action were described as both tumour suppressors and oncogenes . Some scholars believe that the function of KLF4 as an oncogene or tumour suppressor is modulated by its complex interactions with several tumour microenvironments . In the present study, intense nuclear immunostaining for KLF4 was observed in AME, predominantly in the tall columnar cells located on the periphery of the tumour island. In the AME-hTERT strain, nuclear expression of KLF4 with mild cytoplasmic expression was observed. 
KLF4 is mainly located in the nucleus, but its cytoplasmic localisation has also been reported in prostate and oral cancers . Increased KLF4 nuclear expression has been associated with poor prognosis in patients with breast and head and neck cancer . However, another study suggested that the downregulation of KLF4 is associated with the progression of oral carcinoma . Considering the ambiguity of this transcription factor, further studies are required to assess the role of KLF4 in AME. The findings of the present study indicate that SALL4 and LIN28A may play a significant role in the biological behaviour of AME, suggesting a possible role for stem cells in the genesis and progression of AME. The KLF4 transcription factor plays a context-dependent role in carcinogenesis and may be up or downregulated in distinct types of cancer. Therefore, its role in AME needs to be better understood. However, considering its expression together with that of other studied proteins, we suggest its participation and interaction as an oncogene. Although these results are promising, mechanistic and in vivo studies are required to confirm these hypotheses and elucidate the underlying molecular mechanisms. Understanding these mechanisms may have significant implications for the diagnosis, prognosis, and treatment of AME, thus opening up new possibilities for personalised and effective therapies. To the best of our knowledge, this is the first study to evaluate the expression of SALL4, LIN28A, and KLF4 proteins in a benign odontogenic tumour. The study results verify the expression of these stem cell markers in AME neoplastic cells by IHC and in the AME-hTERT cell line by immunofluorescence, suggesting the possible participation of stem cells in the origin, progression, and recurrence of this tumour.
Skipping a pillar does not make for strong foundations:
9de97573-61eb-4937-b0b8-d9aef64dbd3b
10681527
Internal Medicine[mh]
Dose finding via understanding dose/exposure – response relationships and optimizing dose/schedule is fundamental to the development of drugs with narrow therapeutic margins. The US Food and Drug Administration's (FDA's) Project Optimus requires sponsors to explore and justify conclusions of an optimized dose/schedule in oncology. Explicit in this is the need to characterize dose/exposure response relationships for efficacy and toxicity. A dose is prescribed and so recommendations, in the absence of therapeutic drug monitoring, should be on dose. However, understanding the drug exposure required to inform this dose decision is important. When this analysis should be done has been debated, in early studies or in late development with more mature data on efficacy. Whether it is better to demonstrate efficacy before optimizing the dose/schedule, or to generate dose informative data from the first trial is still under debate. There are ethical concerns of potentially under‐exposing patients with a serious life‐threatening condition. In this context, there is not the concept of a “rescue therapy” that allows for lower dose levels to be fully investigated in other therapy areas. After deciding when to do the analysis, the next question is how to do the analysis. This is generally seen as a dose–response question with the assumption that a therapeutic index exists and can be seen via a shift in the dose–response curves between efficacy and safety. This shift is important but implicit in this is an assumption that the shape of the response is the same for both toxicity and efficacy (i.e., similar slope). At the efficacy 50% of maximal effect dose (ED 50 ), there may be little toxicity but if the curve for toxicity is steep (quite often seen) then by ED 80 for efficacy there may be increased occurrence of toxicity. Therefore, understanding all aspects of the dose–response shape is important. Furthermore, disease and other confounding factors limit our ability to determine these relationships. More importantly, intent to treat analysis will flatten the dose–response at the top end or even appear biphasic due to toxicity and other reasons for patient dropout. Such approaches also preclude the possibility of incorporating a wide range of doses by allowing patients to be moved to a higher or lower dose level. In addition, for certain drugs (e.g., check point inhibitors) a time dependence in drug clearance is observed, thus making a link between dose and response more complex. Taking the observation time into account therefore is an important factor when choosing a dose, which typical dose–response or exposure‐response analyses do not consider explicitly. In addition, dose frequency is rarely considered in such analyses. Key questions, such as: Which dose–response shape would be expected? How much statistical power is lost by a priori choosing from a range of functional forms for the dose response? These questions require a fundamental understanding of how pharmacokinetic/pharmacodynamic (PK/PD) feeds into dose response. In doing so, a library of functional forms would not need to be considered to characterize the dose–response. The modeling analysis would become hypothesis driven. Furthermore, analyses of drugs with the same or similar mechanism of action could be compared and knowledge translated forward to new compounds. Finally, an understanding of the time‐course of disease could be used to inform future modeling. 
We therefore argue that not just PKs, the first pillar of pharmacology, should be considered when exploring the link between dose and efficacy but also pillars 2 and 3 (target binding and pharmacology) as well as the inclusion of disease time course as a fourth pillar. Comparing drugs on total dose/concentrations is not sufficient: free drug in plasma should be used as a surrogate for free drug in tissue. After a brief review of the current dose–response literature, we will derive dose–response models for exponential, Mayneord and Bertalanffy growth laws, to illustrate how incorporating PK/PD and disease time‐course further illuminates the expected dose–response relationship. It will be shown that potency and the PK profile, including terminal half‐life, not only sets the location (the ED 50 ) of the curve but also the steepness (Hill coefficient) as well. There exist numerous reviews on PK/PD and dose–response modeling. , , , , Within the literature it appears that dose–response and exposure‐response modeling have become disconnected and to some degree in practice as well. The former directly addresses labeling recommendations, however, the latter, PK/PD modeling, is an important component of identifying the optimal dose. Quantification of drug effect is an important aspect of model‐informed drug discovery and development : compound selection and dose selection. Dose is informed by a combination of the PK and PD properties of a drug. Therefore, for the optimal dose and frequency of dosing to be understood, these factors need to be disentangled. First, thought should be given to the question that the study and analysis will address – again, prior consideration of what the model could be and how the modeling results will be interpreted are important. A well thought out dose‐(exposure) response analysis allows critical factors and covariates to be considered in the analysis. There are, of course, many complexities, especially in oncology. For example, within patient titration is an emerging strategy to improve tolerability. This has been seen with inducers of cell death, such as BCL2 inhibitors, as well as with T‐cell engagers where tumor lysis and the resulting cytokine storm can be life‐threatening. The opportunities for optimizing dose are demonstrated by the modeling in Stein et al. 2012 that led to a trial investigating everolimus dose titration. An example of the large variability in exposure at a given dose is for the drug erlotinib, where drug concentrations can span multiple orders of magnitude at the approved dose, “apparent clearance estimated to 4.85 ± 4.71 L/h, elimination half‐life to 21.86 ± 28.35 h, and apparent volume of distribution to 208 ± 133 L.” Given, that an exposure‐response is seen in certain disease settings, erlotinib would therefore benefit from a therapeutic drug monitoring approach. This example demonstrates how seeing a dose–response for the drug is likely to be very difficult. Typical phase II and III study designs in oncology usually use no more than two dose levels, which can potentially limit the ability to identify a dose‐ (or exposure) response relationship using these modeling tools. The dose‐response literature reviews exhibit a wide range of models, quadratic, linear, exponential, and maximum effect ( E max ). One first asks: Aren't they just parts of the same global curve? MCP‐MOD is an example of considering a library of functional forms and calculating contrasts to test for the presence of a dose response. 
This is a rational approach, however, there can be a subsequent loss of power due to the need to test multiple hypotheses ( https://www.fda.gov/media/99313/download ). Second, the assessment and optimization require a dose response for toxicity to be derived – bringing in an even greater number of choices. Thus, it would be preferable to understand the underlying PK/PD behind a potential dose‐response relationship. The PK half‐life, and its resulting impact on PD half‐life should be considered in the optimization of dose and schedule as well. These will inform on the likely accumulation of drug and effect over time after repeated administrations of drug which will influence the dose‐response relationship. Typically, dose–response modeling considers a sigmoidal‐shaped curve – or linear in the absence of saturation of effect. This curve shape has its origin in the Langmire binding isotherm with further formalization with the development of the operational model of agonism and the effects of antagonists on this system. However, these are effects right at the beginning of the pharmacological causal chain, so why would the dose‐response curve for efficacy, pillar 4 phenotypic changes, be expected to follow this trend as well? The only justification is that it models a bounded response with the curve plateauing asymptotically to a maximum effect – which is often observed. However, one important consideration is that these generic curves do not consider time – whereas trials will generate time‐dependent information. It is often the case that trial participants are not all assessed at the same time after the start of treatment. The apparent potency of a drug, if dose versus effect is plotted, is time‐dependent when there is a delay in the observed onset of drug action – the true underlying potency will remain unaltered. This might be due to slow distribution of drug into tumor, or slow “off rate” binding kinetics, however, these tend to operate on the order of minutes to hours. The third reason for a PK/PD time delay is due to the slow turnover of the biomarker – in this case, the tumor burden in the patient or mouse model. Those processes are operating on the order of weeks and so careful choice of the PK metric to compare to efficacy end points is important. Time series analysis should be performed when the system is not considered to be at steady‐state – important for early induction phase as well for intermittently dosed treatments. The effect of time will be considered in further detail below. There are many clinical modeling studies that include the analysis of time series of tumor burden. Unfortunately, very few of these contain a true dose‐response element – at least a dose range wide enough that dose dependency can be determined above a pairwise comparison. The power of bringing in time components was illustrated by Dickinson et al. where a single time series model is applied that significantly improves the precision of analysis, and so the power. There have been few attempts to bring time‐dependent effects on slow biomarkers into classic exposure response analyses but these tend to be very empirical and “area under the curve (AUC) driven.” Other examples of exposure response modeling approaches are available that do consider disease burden time series. There are some key principles that should be considered to ensure the modeling analysis will deliver what is expected. 
Modeling need not be complex, but it should reflect the key aspects of the biology, pharmacology, and experimental design. The following is generally obvious, however, many of the steps of model development are often implicit. Prior information, assumption‐setting, and validation are all important steps in model development. It is useful to take a step back and consider from first principles what is likely to be observed. The following section discusses key aspects that should be considered. First, we must define the question we wish to answer, and this will define what we wish to estimate from the data and therefore the end points and the analytical approach. The estimand will likely be the parameter values that provide the best model description of the data. However, what we wish to estimate might be derived from the model (e.g., the dose level that gives 90% of the maximum effect or whether there is an efficacy advantage to twice daily versus once daily dosing). Second, the analytical approach should then be translated mathematically to a model that will enable estimation of these key parameters – perhaps taking care to parameterize the model directly with the required estimands – for example, parameterize in terms of dose or concentration for 90% effect rather than derive from 50% effect level (ED 50 or EC 50 ) and the Hill slope. We shall show below how careful analysis at this step can reveal the data trends the model implicitly predicts. Consideration should be made of the statistical aspect of the model especially with reference to sources and levels of variability and potential covariates. Finally, this structural and statistical model are combined via computer coding and applied to the data. Biological considerations First, the nature of the disease in terms of its typical rate of progression and evolution needs to be accounted for. As discussed above, the effect observed is dependent on the time at which observations are made. Cancer is a complex disease with many contributing factors to the observed phenotype – including the rate at which tumors grow and metastasize. However, in all cases, trials record some measure of disease burden, tumor size, and Response Evaluation Criteria in Solid Tumors. Thus, consideration of the appropriate model structure for modeling disease progression , is required to correctly identify the PK/PD relationship and ensure the model is predictive. This will allow alternate dosing regimen to be considered prospectively. An important process to take into consideration is resistance. The source of resistance (or at least whether it is pre‐existing or emergent under treatment), whether it is reversible, and what impact it will have on the pharmacology of the drug –an alteration of E max or potency (EC 50 ) over time – should be considered. The second biological consideration is the relevant end point(s) to incorporate into the model. There are many end points (PKs, tumor size, progression‐free survival, disease control, and overall survival [OS]) that are measured in a clinical trial and the challenge is to choose those most relevant to the questions in hand. In many cases, there is not a direct target engagement biomarker. For example, kinase inhibitors where we can measure phosphorylation of substrate. The effects of DNA damage response inhibitors or checkpoint inhibitors can only be measured several steps down stream. This potentially limits our ability to quantify how well the mechanism is being tested and feed into dose optimization. 
Typically, tumor volume changes are considered for modeling. However, no publications exist that show across a wide range of randomized clinical trials with an OS difference that the metrics from such modeling endeavors fully capture the treatment effect observed on OS and satisfies the Prentice criteria for surrogate end points. Thus, can tumor response still be used as an early pharmacological biomarker to optimize dose? It certainly is the most accessible and data rich with time dependency that might allow dose and schedule dependence to be investigated. Pharmacological considerations Considering the causal chain of pharmacology, with the pillars imbedded in it, it is clear that PKs, mode of binding, and mechanism of action should be taken into account when characterizing the dose–response. PKs is a key consideration because this is the link between dose and the extent and duration of exposure of the body to the drug. If we are to incorporate a PK/PD relationship, then we must know what free drug concentrations are achieved in plasma and relevant tissues. The route of administration, and bioavailability, as well as the rate and extent of distribution will inform on this as will the clearance of the drug. Together these will predict the extent (maximum plasma concentration, AUC, trough plasma concentration) and duration (half‐life) of the drug exposure. We will see these are key parameters in the dose‐response relationships that we will derive below. Second, the binding characteristics and the anticipated pharmacology will inform the relationship between dose and effect. Is this an orthosteric or allosteric inhibitor? Is binding reversible or irreversible – and, if irreversible, what is the re‐synthesis rate of the target protein? The answers to these questions will provide insight into target occupancy over time. Moving to the third pillar: what is the mechanism? What is the cancer hallmark being targeted and how is the target involved in this process? How rapidly is this likely to respond? The answer to these questions will inform on how target occupancy is translated to effect and thus completing knowledge of the PK/PD relationship. Where is the location (EC 50 ) of this relationship? Is it likely to be a standard sigmoid or steep? Finally, is this a combination with another agent – either experimental or current standard of care? What is its mechanism of action? What is the hypothesized pharmacological interaction of these two treatments? A priori knowledge, perhaps in the form of preclinical studies, including quantitative target validation, combined with early clinical PK and biomarker data, should begin to answer the above questions. Other modalities – for example, T‐cell engagers, PROTACs with potentially biphasic concentration effect relationships, and ADCs with DAR dependencies (dose of ADC vs. payload) may appear more complex but will have similar considerations. Study design considerations The key design aspects to consider are the dose levels, frequency, and duration of dosing. Coupled with the time, or times, of end point assessment allows the proposed model to be simulated and therefore parameter estimation to be performed. There are other factors in the design that require consideration. Is a dose titration planned and how will this be conducted? Will patients be able to move to another dose level/treatment group and how will this decision be made? Will this and any other adaptions or dropouts introduce bias – and how will this be handled in the analysis? 
Recent publications , have carried out simulation studies and have found that these can be accounted for if underlying covariate effects are included. Finally, potential sources of variability and important baseline covariates should be carefully considered to make the analysis as broad as possible and so account for confounding effects. This is important for a major source of confounding: the impact of the disease on PKs and PDs. This clearly merits more than one dose level being explored so these can potentially be separated along with baseline covariates that will enable the disentangling of these relationships.
PK/PD RELATIONSHIP DETERMINES THE SHAPE OF THE DOSE–RESPONSE RELATIONSHIP We will now consider a series of case studies of commonly used tumor growth laws and show how the considerations of time scale of disease, incorporation of the pharmacology of the treatment, and time of end point assessment result in a particular shaped dose–response curve. These derivations will include an expression for the AUC that is useful in contexts outside of oncology. Full derivations are given in Appendix .
The utility of this modeling exercise is not just to illustrate how we might anticipate the shape of the dose–response curve, and so aid planning of studies, such as anticipating the required dose range and time(s) for end point assessment. These might also find application as K‐PD models using a theoretical one‐compartment model to drive a disease progression "PD" model.

Case study: Exponential growth

Consider an exponential growth process with a drug effect that reduces the rate of growth or, if $E_{\max}$ is sufficiently large, can reduce the size of the population:

$$\frac{dV}{dt} = V\left(k - \frac{E_{\max}\,C(t)}{EC_{50} + C(t)}\right)$$

where $V$ is the tumor volume, $k$ is the growth rate, and $EC_{50}$ is the drug concentration for 50% of the maximal effect. The initial condition is $V(0) = V_0$, and the dose-dependent ($D$) PKs are described using a one-compartment i.v. bolus dose model in terms of clearance (CL) and volume of distribution ($V_d$):

$$C(t) = \frac{D}{V_d}\,e^{-at}, \qquad a = \frac{\mathrm{CL}}{V_d}$$

where $a$ is the elimination rate constant. The solution to this ordinary differential equation (ODE) is

$$V(t) = V_0\,e^{kt}\left(\frac{EC_{50} + \frac{D}{V_d}e^{-at}}{EC_{50} + \frac{D}{V_d}}\right)^{E_{\max}/a}$$

a similar result has been reported before. For long time ($t \gg 1/a$):

$$V(t) = V_0\,e^{kt}\left(\frac{EC_{50}}{EC_{50} + \frac{D}{V_d}}\right)^{E_{\max}/a}$$

Thus, for long time, a standard sigmoidal dose–response curve is predicted whose steepness is defined by the ratio of the maximum rate of effect and the washout rate of the drug (see Figure ). The steepness as determined by the PK half-life is due to the increasing time over $EC_{50}$ with increasing doses. If a compound has a short half-life (large $a$), then an incremental increase in dose will result in incremental increases in the time above $EC_{50}$. Conversely, for a compound with a long half-life (small $a$), the compound will go from being below $EC_{50}$ for the entire dosing period to exceeding $EC_{50}$ for the entire dosing period over a relatively narrow dose range – thus resulting in a steeper dose response. Notice also that the $ED_{50}$, in this case, is $EC_{50}V_d a/E_{\max}$, so that not only potency and PKs but also $E_{\max}$ determines the location of the curve on the dose axis. In the case of oncology, the sigmoidal relationship represents the apparent fraction of tumor left viable after treatment. At first sight, the above equation implies it is possible to shrink the tumor even if $E_{\max} < k$; however, recall the above is a long-time approximation and so an untreated tumor will have grown over this period. Figure confirms this for $E_{\max} < k$: at high doses the effect plateaus short of tumor shrinkage. Notice that for a single timepoint measurement we cannot disentangle $E_{\max}$ from the drug half-life. Figure shows an example simulation where $E_{\max} > k$ and so tumor shrinkage can occur. Notice that for this single dose case the time of the tumor volume nadir is

$$t_{\mathrm{nadir}} = -\frac{1}{a}\ln\!\left(\frac{k\,EC_{50}\,V_d}{D\,(E_{\max}-k)}\right)$$

which shows a logarithmic relationship with dose. This solution is only real in the case that

$$D > \frac{EC_{50}\,V_d}{E_{\max}/k - 1}.$$

Substituting into the time solution for tumor volume we obtain

$$\frac{V_{\mathrm{nadir}}}{V_0} = \left[\left(\frac{E_{\max}}{k}-1\right)^{k/E_{\max}}\frac{E_{\max}}{E_{\max}-k}\left(\frac{D}{EC_{50}V_d}\right)^{k/E_{\max}}\left(1+\frac{D}{EC_{50}V_d}\right)^{-1}\right]^{E_{\max}/a}.$$

Thus, an apparent sigmoidal relationship would appear for the best overall response. The observation of an "AUC driven" effect is explained by the case where, over the dose range considered, $EC_{50} > D/V_d$.
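A minimal sketch of the closed-form single-dose solution above, under arbitrary illustrative parameter values, shows how the full time course and the long-time dose–response factor behave; none of the numbers correspond to a real compound, and the AUC approximation discussed next corresponds to the regime where the dose term is small relative to $EC_{50}$.

```python
# Numerical illustration of the closed-form single-dose solution derived above
# (one-compartment i.v. bolus PK driving exponential tumor growth with an Emax effect).
# Parameter values are arbitrary illustrations.
import numpy as np

def tumor_volume(t, dose, V0=100.0, k=0.05, Emax=0.15, EC50=1.0, CL=5.0, Vd=50.0):
    a = CL / Vd                                   # elimination rate constant
    C0 = dose / Vd                                # initial plasma concentration
    ratio = (EC50 + C0 * np.exp(-a * t)) / (EC50 + C0)
    return V0 * np.exp(k * t) * ratio ** (Emax / a)

def long_time_effect(dose, Emax=0.15, EC50=1.0, CL=5.0, Vd=50.0):
    # Fraction multiplying V0*exp(k*t) once the drug has washed out (t >> 1/a)
    a = CL / Vd
    return (EC50 / (EC50 + dose / Vd)) ** (Emax / a)

doses = np.array([10, 30, 100, 300, 1000.0])
print("dose-response factor:", np.round(long_time_effect(doses), 3))
print("V at t = 72 h, 300 mg:", round(float(tumor_volume(72.0, 300.0)), 1))
```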
Thus, by taking the Taylor series of the natural logarithm, $\ln(1+x)\approx x$ for $x \ll 1$, and substituting back into the ODE solution, an exponential-dose effect is observed that correlates with AUC:

$$V(t) = V_0\,e^{\,kt - \frac{E_{\max}}{EC_{50}}\mathrm{AUC}}$$

It can be shown similarly that if the PK/PD relationship is steep, with Hill coefficient $n$:

$$\frac{dV}{dt} = V\left(k - \frac{E_{\max}\,C(t)^n}{EC_{50}^n + C(t)^n}\right)$$

then this relationship becomes

$$V(t) = V_0\,e^{kt}\left(\frac{EC_{50}^n + \left(\frac{D}{V_d}\right)^n e^{-ant}}{EC_{50}^n + \left(\frac{D}{V_d}\right)^n}\right)^{E_{\max}/(an)}$$

with similar asymptotic properties. Note that a steeper PK/PD relationship results in a potentially less steep dose response – because a duration of near-maximal effect is not as readily obtained even for longer half-life drugs: the PK half-life has taken on a "PD half-life" due to the steepness of the PK/PD relationship. Now consider regular repeat dosing, so that long-term treatment effects can be modeled along with the consequences of drug accumulation on this effect, including where dose fractionation is considered. We assume that a drug is dosed at a fixed dose level $D$ every $\tau$ hours. For q.d. dosing $\tau = 24$ h; for b.d. dosing $\tau = 12$ h. The PK profile after $N$ doses is described as:

$$C(t) = \frac{D}{V_d}\sum_{i=0}^{N-1} e^{-a(t - i\tau)}$$

The solution (see Figure for a time series plot), for long time, is:

$$V(t) = V_0\,e^{kt}\left(\frac{EC_{50}}{EC_{50} + \frac{D}{V_d}\frac{1}{1-e^{-a\tau}}}\right)^{E_{\max}N/a}$$

Notice that the effect per dose administered is

$$\left(\frac{EC_{50}}{EC_{50} + \frac{D}{V_d}\frac{1}{1-e^{-a\tau}}}\right)^{E_{\max}/a}$$

Thus, going from acute to chronic treatment there will be an apparent reduction in the $ED_{50}$ by (approximately) the accumulation factor $1-e^{-a\tau}$. Figure shows a comparison of the dose–response relationship for single and repeated daily administration as a function of drug half-life.

The impact of dose fractionation

With this mathematical framework, dose and schedule are not two separate factors; they can be integrated into a single, mechanistic curve. Consider a dose fractionation study comparing q.d. ($N$ doses) versus b.d. ($2N$ doses) dosing. Then, from a total daily dosing perspective, the predicted long-term effects will be:

$$E_{q.d.} = \left(\frac{EC_{50}}{EC_{50} + \frac{D}{V_d}\frac{1}{1-e^{-24a}}}\right)^{E_{\max}N/a}, \qquad E_{b.d.} = \left(\frac{EC_{50}}{EC_{50} + \frac{D}{2V_d}\frac{1}{1-e^{-12a}}}\right)^{2E_{\max}N/a}$$

Thus, b.d. would be more effective if

$$\frac{EC_{50}}{EC_{50} + \frac{D}{V_d}\frac{1}{1-e^{-24a}}} > \left(\frac{EC_{50}}{EC_{50} + \frac{D}{2V_d}\frac{1}{1-e^{-12a}}}\right)^2$$

This holds if

$$\frac{AF_{q.d.} - AF_{b.d.}}{AF_{b.d.}^2} < \frac{D}{V_d\,EC_{50}}, \qquad AF_{q.d.} = \frac{1}{1-e^{-24a}}, \quad AF_{b.d.} = \frac{1}{1-e^{-12a}}$$

which is always true; however, the gains may be marginal, as shown below, especially for long half-life drugs (see Figure ).

Case study 2 of sub-exponential processes: Mayneord's model of linear radial growth

What if the disease progression is not exponential? We consider sub-exponential growth models that are used in oncology. The Mayneord growth law is defined as:

$$\frac{dV}{dt} = kV^{2/3}$$

This has the solution:

$$V(T) = \left(\frac{k}{3}T + V_0^{1/3}\right)^3.$$

We can also incorporate a PK/PD effect in the model:

$$\frac{dV}{dt} = V^{2/3}\left(k - \frac{E_{\max}\,C(t)}{EC_{50} + C(t)}\right)$$

Notice here that $[E_{\max}] = \mathrm{L\,T^{-1}}$. With the PK model $C(t)$ defined as before, the dose response is

$$V(T) = \left[\frac{k}{3}T + V_0^{1/3} - \frac{E_{\max}}{3a}\ln\!\left(\frac{EC_{50} + \frac{D}{V_d}}{EC_{50} + \frac{D}{V_d}e^{-aT}}\right)\right]^3$$

Note that $[E_{\max}/a] = \mathrm{[L]}$ and so the effect is the reduction of tumor radius over time. Note also that, at larger time, the drug effect is $\frac{E_{\max}}{3a}\ln\!\left(1 + \frac{D}{V_d\,EC_{50}}\right)$, and so a log-linear dose effect (similar but not identical to a sigmoid) would be observed.
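Before the repeat-dose extension given below, the single-dose Mayneord solution just derived can be evaluated directly. The sketch below, with illustrative parameter values only, also prints the large-time radius reduction, which grows log-linearly with dose as noted above.

```python
# Sketch of the single-dose Mayneord solution derived above, before it is extended to
# repeated dosing below. Parameter values are illustrative assumptions only.
import numpy as np

def mayneord_treated(T, dose, V0=1000.0, k=0.6, Emax=1.2, EC50=1.0, CL=5.0, Vd=50.0):
    a = CL / Vd
    drug_term = (Emax / (3.0 * a)) * np.log((EC50 + dose / Vd) /
                                            (EC50 + (dose / Vd) * np.exp(-a * T)))
    return ((k / 3.0) * T + V0 ** (1.0 / 3.0) - drug_term) ** 3

def large_time_radius_effect(dose, Emax=1.2, EC50=1.0, CL=5.0, Vd=50.0):
    # Reduction in tumor "radius" (V^(1/3)) once the dose has washed out: log-linear in dose
    a = CL / Vd
    return (Emax / (3.0 * a)) * np.log1p(dose / (Vd * EC50))

doses = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])
print("radius reduction:", np.round(large_time_radius_effect(doses), 2))
print("V(T = 120 h, 300 mg) =", round(float(mayneord_treated(120.0, 300.0)), 1))
```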
Similarly, a repeat dose relationship (over $N$ doses, $\tau$ apart) is:

$$V(T) = \left[\frac{k}{3}T + V_0^{1/3} - \frac{E_{\max}}{3a}\sum_{i=1}^{N}\ln\!\left(\frac{EC_{50} + \frac{D}{V_d}\frac{1-e^{-ai\tau}}{1-e^{-a\tau}}}{EC_{50} + \frac{D}{V_d}e^{-a\tau}\frac{1-e^{-ai\tau}}{1-e^{-a\tau}}}\right)\right]^3$$

Figure shows a time series plot for this solution and Figure has a comparison of single and repeat dose–response relationships as a function of drug half-life.

Case study 3 of sub-exponential processes: Bertalanffy

As a final example, we examine the Bertalanffy model because this model allows for sub-exponential growth, like the Mayneord model, but also has an explicit cell death rate $k_d$ that allows the tumor size to plateau. The governing equation for the Bertalanffy model is

$$\frac{dV}{dT} = kV^{2/3} - k_d V$$

The solution to which is:

$$V(t) = \left[\frac{k}{k_d} + \left(V_0^{1/3} - \frac{k}{k_d}\right)e^{-\frac{k_d}{3}t}\right]^3$$

We consider the case where the drug effect is to slow cell proliferation.

Bertalanffy with a drug-dependent reduction in proliferation rate

We consider the case where the drug effect is applied to the proliferation component of the model:

$$\frac{dV}{dT} = \left(k - \frac{E_{\max}\,C(t)}{EC_{50} + C(t)}\right)V^{2/3} - k_d V$$

In this case, the solution is:

$$V(t) = \left[\frac{k}{k_d} + \left(V_0^{1/3} - \frac{k}{k_d} - \frac{E_{\max}}{3a}\sum_{n=0}^{\infty}\frac{(-1)^n}{n - \frac{k_d}{3a} + 1}\left(\frac{D}{EC_{50}V_d}\right)^{n+1}\left(1 - e^{-a\left(n - \frac{k_d}{3a} + 1\right)t}\right)\right)e^{-\frac{k_d}{3}t}\right]^3$$

The infinite polynomial series is similar to that of the Taylor expansion for $\ln(1+x)$, and so a relationship similar to a log-linear effect of dose emerges. Its similarity is dependent on the size of $k_d/3a$. Unfortunately, due to its derivation, the above solution is only applicable for $D/(EC_{50}V_d) < 1$. A plot of the time behavior of this model is shown in Figure . Expressions for the response of the Bertalanffy model after repeated dosing can be derived with a similar logic to the exponential and Mayneord models – however, the resulting expressions will be significantly more complex.
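Because the series solution above is restricted to $D/(EC_{50}V_d) < 1$, a natural complement is to integrate the treated Bertalanffy model numerically for any dose, as in the hedged sketch below (illustrative parameter values, not the authors' own implementation).

```python
# The series solution above is only valid for D/(EC50*Vd) < 1, so for higher doses it is
# natural to integrate the Bertalanffy model with drug effect numerically.
# Illustrative parameter values only.
import numpy as np
from scipy.integrate import solve_ivp

def bertalanffy_treated(t, V, dose, k=0.5, kd=0.05, Emax=0.4, EC50=1.0, CL=5.0, Vd=50.0):
    a = CL / Vd
    C = (dose / Vd) * np.exp(-a * t)                  # single i.v. bolus concentration
    inhib = Emax * C / (EC50 + C)                     # Michaelis-Menten drug effect
    return (k - inhib) * V ** (2.0 / 3.0) - kd * V

t_eval = np.linspace(0, 240, 241)                     # hours
for dose in (0.0, 30.0, 300.0):                       # 300 mg gives D/(EC50*Vd) = 6 > 1
    sol = solve_ivp(bertalanffy_treated, (0, 240), [1000.0], args=(dose,), t_eval=t_eval)
    print(f"dose {dose:5.0f}  V(240 h) = {sol.y[0, -1]:10.1f}")
```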
This review has highlighted the need to use models to explore scenarios and so generate hypotheses on optimal dose and schedule, as well as experimental design. The modeling need not be complex, but should reflect the key aspects of the biology, pharmacology, and experimental design to ensure the inference is relevant. In developing such a model, biological and pharmacological priors can be included in the analyses. In addition, simplicity allows for the more robust application of nonlinear mixed-effects modeling in the analysis. Taking a step back and considering from first principles what we would expect to observe is important. The analysis, incorporating the pillars of pharmacology (PK, PD, and disease progression), has shown that the shape of the dose–response relationship is dependent upon the underlying disease progression dynamics, the times of end point assessment, and the mechanism-informed PK/PD relationship. The simulations in the figures demonstrate that the range of dose levels, dosing frequencies, and times for end point assessment can be selected with consideration of prior mechanistic knowledge. Therefore, a consideration of all the pillars of pharmacology will provide a strong foundation for trial design and interpretation by anticipating the dose and time dependence of response. Historically there has been a gulf between very empirical PK/PD modeling and mechanistic insight. Is this gulf the driver behind the interest in Quantitative Systems Pharmacology modeling? The authors believe that these approaches are part of a modeling continuum, and it should be the case that the appropriate (modeling) tool is used for the job in hand: therefore, could these approaches meet somewhere in the middle? The analyses highlight explicitly that the location and the shape of the dose–response relationship are dependent not only on the potency and other pharmacological considerations but also on the pharmacokinetics of the drug. This observation is to be naturally expected; however, it is worth making it explicit: Dose response can change across different populations due to PK changes – terminal half-life as well as potency differences.
Dose response can change across species due to differences in PKs – this has implications for the extrapolation of doses tested in pivotal toxicology studies to human starting and maximally safe doses; PK/PD combined with predictions of human PK is a much more sensible approach. Dose response can also alter depending upon the time of end point assessment and whether this varies between trial participants. The analyses do omit two important concepts in oncology, namely drug resistance and the combination of drugs to counter it. In a very simple manner, resistance can be factored into the above models by including a time-dependent reduction in potency (EC 50 ) or efficacy ( E max ) in the repeated dose case. Which of these is most appropriate is dependent upon the mechanism – whether it is an adaptation of the whole cell population (EC 50 ) or the emergence of completely resistant clones ( E max ). Combinations might require solving the ODEs directly to incorporate the different PK and PD properties of the treatments. However, for a simple comparison of, for example, standard of care versus standard of care plus a novel therapeutic, a consideration as to whether the combination alters potency or E max would enable the above derivations to be used. No funding was received for this work. The authors declared no competing interests for this work.
Clinical profile and outcomes of rejection after deep anterior lamellar keratoplasty in keratoconus
cd2be935-f012-49d8-b9d7-7026a8287fda
11831948
Surgical Procedures, Operative[mh]
This is a retrospective study of six eyes of six patients who were diagnosed with a rejection episode after DALK between 2019 and 2023. During this period, 487 eyes had undergone DALK. The demographics, clinical features, treatment, and outcomes after rejection were studied. The demographic data and clinical details such as age, history of allergic eye disease, indication for surgery, coexisting corneal vascularization (if any), donor details, surgical factors like graft size and centration, type of surgery (manual vs. big-bubble DALK), frequency of steroid usage, and status of sutures in situ at the time of rejection were analyzed. Routine postoperative care after DALK involved prednisolone acetate 1% at three-hourly frequency for the first 1–2 weeks, followed by a weekly taper to twice daily, which continued till all the sutures were removed. In the event of steroid-related raised intraocular pressure, loteprednol etabonate 1% was used instead of prednisolone acetate. The general practice pattern for management of an acute rejection episode consisted of topical prednisolone acetate 1% hourly, which was gradually tapered over 6–8 weeks to twice daily. Resolution of the rejection episode was defined by the resolution of graft haze, quiet appearance of the conjunctiva, and regression of graft neovascularization. The demographics and clinical details of the six patients are summarized in the accompanying table, and slit-lamp photographs of the six eyes are shown in the accompanying figures. Demographics The median age of the patients was 24 years (range 18–44 years), and there was an equal number of males and females. The indications for surgery were advanced keratoconus with poor contact lens tolerance and subjectively inadequate spectacle corrected distance visual acuity (CDVA) in all eyes. Of the six patients, four had concomitant allergic eye disease and one had lid margin disease. The left eye was affected in four patients. Median follow-up from the time of surgery was 2.5 (range 1–7) years. Donor and surgical details Five eyes had type 1 big-bubble DALK and one eye had manual dissection DALK. All surgeries were uneventful. Sixteen interrupted sutures were placed during surgery in all eyes. The donor graft size ranged from 8.25 to 8.75 mm. The grafts were oversized by 0.25 mm in each case. One graft was eccentric to accommodate a peripheral area of thinning in the host. The donor corneas were preserved in either Cornisol or McCarey-Kaufman preservation solution for up to 1–3 days. Median age of the donors was 41 (range 35–83) years. Donor–recipient gender mismatch was noted in two patients. Clinical features of rejection Median time of rejection from the day of surgery was 12 (range 3–36) months. Median duration of symptoms was 4 (range 2–28) days. Of the six patients, five complained of mild ocular discomfort, redness, photophobia, and reduced vision. One patient (Patient 2) had excruciating pain and discomfort. The clinical signs in all eyes included congestion and haze localized to the graft. Patient 2, who complained of severe pain, had intense congestion, graft infiltration, and melt localized to the graft, which resembled microbial keratitis [ b]. Although there was no anterior chamber reaction or lid edema, a complete microbiologic work-up was performed to rule out an infective etiology. Scrapings from the site of the graft were sent for smears and cultures for bacteria, fungi, parasitic infections, and virology studies. The smears did not reveal any organism, and cultures were sterile.
In view of the melt, tissue adhesive was applied over the graft–host junction (1b). Two patients were using topical steroids, although inconsistently, while four had discontinued steroids. One patient (Patient 4) developed two episodes of rejection 3 months apart [ c]. This patient had stopped steroids at 1 month after the first episode of graft rejection, which could have been a possible risk factor for the second episode. Management and outcomes Five patients were treated with prednisolone acetate 1% hourly initially, with gradual tapering over weeks, until twice-a-day dosing was maintained. One patient (Patient 2) was initially treated with intensive antibiotic therapy (fortified cefazolin and ciprofloxacin) for 2 days, and steroids were instituted only after the microbiologic work-up proved inconclusive. Intravenous methylprednisolone therapy at a dose of 1 g given as a single dose was additionally administered in one patient (Patient 1) in view of a marked drop in vision due to diffuse graft haze. All except one (Patient 2) responded favorably to the treatment regimen with restoration of graft clarity. Patient 2 was responding to topical steroids but developed a superadded fungal infection at 2 months' follow-up [ c], which resolved with topical antifungal medications, leaving scarring in the graft [ d]. Excluding the patient who developed scarring following secondary fungal keratitis, the median best corrected visual acuity in five patients was restored to 20/30 (range 20/30–20/50) at 3–4 months post-rejection and was maintained until the last follow-up. In one patient (Patient 4), lipid deposition was seen in the mid-stroma of the graft along ghost vasculature after the second episode of rejection at 3 months [ d]. The intraocular pressure was recorded in the normal range in all eyes at all visits, and fundus examination revealed a healthy disc with a normal cup–disc ratio.
DALK has several advantages over penetrating keratoplasty (PK); however, the risk of epithelial and stromal rejection persists. Most of the rejection episodes occurring after PK present as graft edema with keratic precipitates, primarily due to endothelial involvement. Loss of graft clarity due to failure from endothelial rejection after PK predominantly results in edema that can be managed later with endothelial keratoplasty with good outcomes. However, rejection after DALK can lead to a loss of transparency due to stromal haze, which may not recover completely if management is delayed. The loss of stromal clarity may also be severely worsened by the development of neovascularization in the graft. It appears from various studies that the overall rate of rejection in DALK may be similar to that reported after PK. The clinical features and risk factors of rejection after DALK have been described; however, the literature on rejection after DALK is relatively sparse compared to rejection after PK. This is likely due to the relatively smaller proportion of DALK procedures compared with PK and posterior lamellar keratoplasty.
Hence, more descriptive studies on the clinical profile and management of rejection episodes in DALK will add crucial and incremental information on identifying the varied clinical presentations and outcomes of rejection after DALK in different indications. Previous studies describe that graft rejection after DALK is characterized by acute stromal haze, Descemet membrane folds, and vascularization, which can increase in extent if left untreated. Although graft recovery following treatment of rejection episodes has been documented to be favorable, a risk of graft failure due to loss of graft transparency exists. Poor compliance with steroids and allergic eye diseases have been associated as the predisposing risk factors for rejection after DALK. The majority of rejections have occurred within the first year of the graft, although a few reports indicate occurrence even later, at 3.5 years. In this study, all six eyes with rejection had DALK performed for keratoconus. The potential risk factors that were identified were vernal keratoconjunctivitis in four eyes (Patients 1, 2, 3, and 4), lid margin disease in one eye (Patient 5), an eccentric graft in one eye (Patient 6), and poor compliance with steroid instillation in all eyes. We made the diagnosis of graft rejection based on previously elucidated features of stromal graft rejection and by excluding other differentials such as exacerbation of the comorbid clinical condition (vernal keratoconjunctivitis (VKC) in four patients and lid margin disease in one patient). The symptoms of unilateral discomfort, photophobia, congestion, and drop in vision restricted to the grafted eye, along with clinical features of vascularization at the site of sutures, cellular infiltration, and haze in the graft, favored a diagnosis of stromal graft rejection. An exacerbation of allergy is typically characterized by the symptom of itching, which was not the predominant symptom complained of by any of the patients. Three eyes had rejection within the first year (at 3 and 6 months), while three eyes had it later (at 16 months in two eyes and 38 months in one eye). Four eyes had all sutures intact, while two eyes had sutures out at the time of presentation. The patient who had rejection at 38 months had all sutures out, except two that remained partly embedded in the deep stroma. Similar to PK, sutures are potential trigger factors for rejection episodes in DALK. In addition, persistence of graft keratocytes for many years after DALK has also been proposed as a pathogenic mechanism for delayed rejections. This highlights the importance of compliance with steroids in DALK when sutures are present, and for some years beyond complete suture removal in proinflammatory conditions such as keratoconus associated with concomitant allergy. In our patients, the clinical features involved a variable degree of graft haze/opacification, congested conjunctiva, and vascularization at the graft–host junction and suture bed. In one patient (Patient 2), there was marked infiltration and melt of the graft that mimicked microbial keratitis. Such a fulminant clinical presentation due to a rejection episode after DALK has not been described in previous reports. The relative paucity of anterior chamber reaction, melt localized to the graft periphery, absence of lid edema, and inconclusive microbiology can help in making an appropriate diagnosis, as was done in this patient.
The mainstay of therapy for rejection after DALK has been topical steroids, although the use of intravenous methylprednisolone at the initial presentation has been advocated by some authors. The response to intensive topical steroid therapy was satisfactory in all except Patient 2, who developed fungal keratitis and scarring in the graft in the follow-up period. In one patient, intravenous methylprednisolone was instituted in view of severe graft edema, in addition to frequent topical steroids. It is uncertain if the duration of symptoms has any influence on the recovery of graft transparency. In one study, the graft recovered well even with a prolonged duration of symptoms of 13 days. In our study, the one patient who had a prolonged history of 28 days had good restoration of graft clarity on frequent treatment with steroids. In another patient, there was a second episode of rejection on stopping steroids, which resolved with steroids, although neovascularization and lipid deposition were observed later as sequelae. This suggests that rejection episodes, if left untreated or inadequately treated, can lead to loss of graft clarity due to secondary changes such as vascularization and lipid deposition and can result in loss of lines of vision. Based on the observations made in the present study, the suggested protocols for management of rejection are frequent usage of topical steroids initially, intravenous steroids in cases of severe rejection, and a trial of steroids despite a seemingly long duration of clinical signs and symptoms. In addition, compliance with steroids is important even in the later years of the postoperative period after DALK, as the risk of rejection persists. With prompt diagnosis and appropriate treatment, restoration of graft transparency is favorable. Stromal rejection after DALK in keratoconus can occur on sudden cessation of topical steroids. Prompt recognition of clinical signs and symptoms with timely management is important for quick reversal of the rejection episode. Conflicts of interest There are no conflicts of interest.
An open experimental course: quantitative proteomic analysis of thyroid cancer tissue sections based on ultra-high performance liquid chromatography-tandem mass spectrometry
3e0a31d2-4d69-4b01-a873-71452bc1f9fe
11883525
Biochemistry[mh]
1.1 Instruments, reagents, and materials

The Ultimate 3000 RPLC high-performance liquid chromatography system and the Q Exactive HF-X high-resolution mass spectrometer were purchased from Thermo Fisher; the scientz-IID ultrasonic cell disruptor was purchased from Scientz Biotechnology. Sodium dodecyl sulfate (SDS, ≥98.5%), urea (≥99%), guanidine hydrochloride (GuHCl, ≥99%), tris(2-carboxyethyl)phosphine (TCEP, ≥98.5%), iodoacetamide (IAA, ≥99%), ammonium bicarbonate (≥99.5%), formic acid (FA, ≥96%), and phosphate-buffered saline (1×PBS) were purchased from Sigma-Aldrich; trypsin (mass spectrometry grade) was purchased from Promega; and chromatography-grade acetonitrile was purchased from Merck.

1.2 Proteome sample preparation

1.2.1 Cell disruption: HeLa cells were resuspended in 200 μL of 40 g/L SDS, 6 mol/L guanidine hydrochloride, 8 mol/L urea, or 1×PBS buffer, and the samples were placed on ice. Cells were disrupted with the ultrasonic cell disruptor under the following conditions: 5 s on, 5 s off, repeated for a total time of 1 min at 40 W. When different lysis buffers were used, the ultrasonic probe was washed three times alternately with methanol and deionized water to avoid cross-contamination between samples. The sonicated solution was centrifuged at 16000 g for 20 min to remove insoluble material, and the supernatant was collected as the protein solution extracted from the cells.

1.2.2 Proteome sample pretreatment: TCEP was added to the protein solution to a final concentration of 10 mmol/L, mixed, and reacted at 95 °C for 10 min to denature and reduce the proteins. After cooling to room temperature, IAA was added to a final concentration of 20 mmol/L and allowed to react for 30 min at room temperature in the dark. A filter-aided sample preparation method was used to complete proteome sample processing, as follows: the sample was transferred onto a filter with a molecular weight cutoff of 10000 Da and centrifuged at 16000 g for 40 min. The filter was washed three times each with 200 μL of 8 mol/L urea and 50 mmol/L ammonium bicarbonate solution. Trypsin was added at an enzyme-to-protein mass ratio of 1∶20 and digestion was carried out at 37 °C for 16 h. The digested peptides were recovered by centrifugation at 16000 g for 40 min; the filter was washed twice with 30 μL of 50 mmol/L ammonium bicarbonate solution, and the washes were combined with the peptides after centrifugation. After lyophilization, the peptides were redissolved in 0.1% (v/v) FA in water and stored at −80 °C until UHPLC-MS/MS analysis. All experiments were performed in triplicate.

1.3 Tissue section sample processing

FFPE samples from patients with goiter and papillary thyroid carcinoma (T1N1M0) were obtained from the Second Affiliated Hospital of Dalian Medical University, and the samples were approved by the ethics committee (approval No. 2020-057). The section samples were placed in a 50 mL beaker and immersed in xylene for a total of 15 min. The tissue was then soaked twice each in absolute ethanol, 70% (v/v) aqueous ethanol, and 56% (v/v) aqueous ethanol, 10 min each time, to complete dewaxing and rehydration of the paraffin-embedded tissue sections. Proteins were then extracted with 60 μL of 40 g/L SDS and proteome sample pretreatment was completed as above.

1.4 UHPLC-MS/MS analysis

Mobile phase A was 2% (v/v) acetonitrile in water containing 0.1% (v/v) FA, and mobile phase B was 80% (v/v) acetonitrile in water containing 0.1% (v/v) FA. Peptides were loaded onto an Acclaim PepMap RSLC C18 column (250 mm×75 μm, 2 μm) for separation at a flow rate of 300 nL/min.

1.4.1 Optimization of the liquid chromatography gradient: Method 1: total gradient elution time of 50 min. 0–30 min, 5%B–30%B; 30–35 min, 30%B–50%B; 35–38 min, 50%B–95%B; 38–43 min, 95%B; 43–44 min, 95%B–5%B; 44–50 min, 5%B. Method 2: total gradient elution time of 85 min. 0–60 min, 5%B–30%B; 60–70 min, 30%B–50%B; 70–71 min, 50%B–95%B; 71–75 min, 95%B; 75–76 min, 95%B–5%B; 76–85 min, 5%B. Method 3: total gradient elution time of 130 min. 0–5 min, 5%B–8%B; 5–105 min, 8%B–30%B; 105–120 min, 30%B–50%B; 120–121 min, 50%B–90%B; 121–125 min, 90%B; 125–126 min, 90%B–5%B; 126–130 min, 5%B.

1.4.2 Mass spectrometry acquisition conditions: Data were acquired in positive-ion data-dependent acquisition (DDA) mode. MS1 spectra were acquired over a mass range of 350–2000 Da at a resolution of 60000, with an automatic gain control (AGC) target of 3×10^6 and a maximum ion injection time of 30 ms. Precursor ions were fragmented at a normalized collision energy of 27%, and peptides with charge states +2 to +7 were selected. MS2 scans were performed at a resolution of 15000 with an AGC target of 5×10^4, an isolation window of m/z 1.6, and a dynamic exclusion time of 45 s.

1.5 Data analysis

The mass spectrometry data were analyzed with Proteome Discoverer (PD) software (version 2.4). Spectra were matched against the human database downloaded from UniProt on February 2, 2021. The mass tolerances for precursor and fragment ions were 1.0×10^-5 (10 ppm) and 2.0×10^-5 (20 ppm), respectively. Trypsin was set as the specific protease, with a maximum of two missed cleavage sites allowed. Protein N-terminal acetylation (+42.0150 Da) and methionine oxidation (+15.9949 Da) were set as variable modifications, and alkylation of cysteine (+71.0371 Da) was set as a fixed modification. The false discovery rate (FDR) for peptide and protein identification was controlled at ≤1%. Label-free quantitative analysis of the mass spectrometry data was performed with MaxQuant software (version 1.6.5.0), with search parameters consistent with the PD settings.
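To make the gradient programs in section 1.4.1 easier to compare, the short sketch below encodes them as (time, %B) breakpoints and interpolates between them; this is only a teaching illustration, not code used by the instrument software.

```python
# Sketch: the three gradient programs above expressed as (time, %B) breakpoints,
# with linear interpolation to query %B at any time point. Breakpoint values are
# transcribed from the methods; the helper function itself is illustrative.
import numpy as np

gradients = {
    "method_1_50min":  [(0, 5), (30, 30), (35, 50), (38, 95), (43, 95), (44, 5), (50, 5)],
    "method_2_85min":  [(0, 5), (60, 30), (70, 50), (71, 95), (75, 95), (76, 5), (85, 5)],
    "method_3_130min": [(0, 5), (5, 8), (105, 30), (120, 50), (121, 90), (125, 90),
                        (126, 5), (130, 5)],
}

def percent_B(method, t_min):
    times, b = zip(*gradients[method])
    return float(np.interp(t_min, times, b))

print(percent_B("method_3_130min", 60))   # %B one hour into the 130 min gradient
```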
855.46747)在3针重复采集下的色谱保留时间有较好的一致性,保留时间的相对标准偏差只有0.04%。至少63.5%的蛋白质被3针重复鉴定,超过73.6%的蛋白质在至少两针中鉴定到。比较3次质谱重复的定量比值,皮尔森相关系数为0.95~0.99,说明此方法具有较好的分析重复性。 2.3 甲状腺肿和甲状腺乳头状癌患者组织切片样本定量分析 FFPE样品在临床上具有重要的研究价值。甲状腺乳头状癌是一种在临床中常见的疾病。若不及时的干预和治疗,有进一步发展成晚期癌症和转移扩散的风险。因此,对甲状腺乳头状癌组织切片样品蛋白质组进行研究以期寻找疾病相关的潜在生物标志物,可助力疾病的及时发现和治疗,提高患者的治愈率和生存质量。 进一步对3例甲状腺肿和3例甲状腺乳头状癌的FFPE样品的蛋白质组进行分析。结果发现,两种组织切片样品中可以共同定量到432个蛋白质,任意两组样品之间的无标记定量强度的皮尔森相关系数超过0.85,证明此方法具有良好的定量分析重复性。为保证分析的准确性和可信度,通过 t 检验,我们将 p <0.05且甲状腺乳头状癌与甲状腺肿样品相比发生2倍变化的蛋白质作为差异蛋白。最终,与甲状腺肿相比,在甲状腺乳头状癌组织切片中定量到33个差异蛋白质,其中11个为上调蛋白质,22个为下调蛋白质,利用差异蛋白质成功实现了疾病的分子分型研究,证明利用组织切片蛋白质组可以成功实现甲状腺乳头状癌和甲状腺肿样品的区分。 对不同样品间的差异蛋白质涉及的生物学功能及KEGG通路进行分析。如 所示,与甲状腺肿样品相比,甲状腺乳头状癌患者明显变化的蛋白质主要参与了细胞外基质受体相互作用(显著性水平 p <3.4×10 -4 )、对活性氧响应( p <2.5×10 -5 )、抗氧化活性( p <6.5×10 -4 )、氧化应激反应( p <1.2×10 -3 )等过程,而文献报道这些过程与甲状腺疾病患者的代谢过程异常密切相关 ,证明此方法可实现高可信度的甲状腺组织切片样品的蛋白质组分析。 对差异蛋白质进一步分析,发现高达57.6%的蛋白质(EHD1、NID2、AGRN、TPO、VCP、LAMC1、RAP1B、RPN2、PDIA3、PRDX5、PRDX6、GSTP1、APOA1、LMNA、ANXA1、APOE、FN1、HP、S100A6)被报道和甲状腺癌及甲状腺相关疾病密切相关 ,表明我们的结果与文献报道有较好的一致性。在差异蛋白质中,EH结构域蛋白(Q9H4M9和EHD1)在甲状腺乳头状癌患者中有3.37倍的下调( p =0.0071),文献报道此蛋白质在甲状腺癌中扮演着重要的角色,与肿瘤的大小、淋巴结转移和癌症表面生长因子受体(EGFR)的表达紧密相关 。S100钙结合蛋白A6(P06703和S100A6)在甲状腺乳头状癌患者中有13.6倍的上调变化( p =0.043),此蛋白质被报道能够促进癌细胞增值 ,能被用作甲状腺癌治疗的潜在作用靶点。此外,尽管其他的差异表达蛋白质(FREM2、HSPG2、GANAB、CKB、LAMA5、HNRNPK、CNDP2、PAFAH1B2、RAB7A、RO60、MAOA、RPL13A、RPS3、WASF2)在甲状腺乳头状癌中尚未被报道,但已有研究表明它们与卵巢癌、结肠癌、胰腺癌、前列腺癌等疾病密切相关 ,这表明它们可能在癌症的发展过程中扮演着关键的指示角色。因此,这些蛋白质作为甲状腺乳头状癌潜在生物标志物的可能性值得进一步研究和探索。可见,基于组织切片蛋白质组的分析在甲状腺乳头状癌潜在生物标志物的发现上具有巨大的应用潜力。 以HeLa细胞为样品,考察不同蛋白质提取试剂对蛋白质提取效率的影响。分别采用40 g/L SDS、6 mol/L盐酸胍、8 mol/L尿素、1×PBS缓冲液提取相同量HeLa细胞中的蛋白质,并采取基于滤膜辅助的蛋白质组样品预处理方法完成蛋白质组样品处理 ,利用质谱仪对酶解的肽段重复采集3次,采集数据导入至PD软件中合并检索,得到不同蛋白质提取试剂处理下的鉴定结果。如 所示,40 g/L SDS溶液提取的蛋白质鉴定数量最高,1×PBS缓冲液提取的蛋白质鉴定数量最低。两者相比,SDS溶液的蛋白质鉴定数量高出32.3%。此外,40 g/L SDS溶液提取条件下鉴定的蛋白质能覆盖到其他3种提取试剂的64.4%~82.2%。与其他3种提取试剂相比,40 g/L SDS溶液能单独鉴定775个蛋白质,经过基因本体论(GO)细胞组成分析发现,高达34.1%(264个)蛋白质定位为与膜相关的蛋白质。以上结果归因于SDS是一种强效且具有两亲性的表面活性剂,对蛋白质有更强的溶解能力,有利于鉴定到更多的蛋白质。因此,为了提高蛋白质的鉴定覆盖度,后续采用40 g/L SDS溶液完成蛋白质样品提取。 通过梯度洗脱方法,液相色谱可以有效地将复杂肽段混合物中的不同组分按照其保留性质的差异进行分离,进而被质谱检测。色谱分离梯度直接决定了多肽和蛋白质的鉴定覆盖度。因此,对液相色谱的梯度洗脱条件进行优化以评价相同样品在不同分离梯度下的色谱分离情况,并考察不同分离梯度对蛋白质鉴定覆盖度的影响。 用同一根色谱柱在50、85和130 min的分离梯度下分别分析Hela细胞酶解肽段。如 所示,随着分离梯度时间的延长,肽段样品的出峰时间明显延长。此外,随机提取质量电荷比值( m/z )为855.46747的离子,如 所示,在3种不同的分离梯度下,此离子的出峰时间分别为38.63、69.81和114.63 min,即同一组分在不同液相色谱条件下出峰时间不同。 进一步对不同液相色谱分离梯度质谱鉴定的蛋白质进行分析。如 所示,在总分离时间为50、85和130 min的条件下,基于PD软件对每个条件下重复采集3针的数据合并检索。结果显示,蛋白质鉴定的数量分别为2767、3267、5029个,肽段的鉴定数量为7993、10034、24692条。即色谱分离梯度的延长实现了肽段更好的分离,利于蛋白质和肽段的鉴定。如 所示,130 min的分离梯度下鉴定的蛋白质可以覆盖50、85 min鉴定蛋白质的71.9%~72.5%。 考虑到数据采集的重复性是准确定性定量分析的前提。因此,对数据采集的重复性进行评价。如 所示,在85 min的分离梯度下重复3次分析Hela细胞酶解肽段样品,同一离子( m/z 855.46747)在3针重复采集下的色谱保留时间有较好的一致性,保留时间的相对标准偏差只有0.04%。至少63.5%的蛋白质被3针重复鉴定,超过73.6%的蛋白质在至少两针中鉴定到。比较3次质谱重复的定量比值,皮尔森相关系数为0.95~0.99,说明此方法具有较好的分析重复性。 FFPE样品在临床上具有重要的研究价值。甲状腺乳头状癌是一种在临床中常见的疾病。若不及时的干预和治疗,有进一步发展成晚期癌症和转移扩散的风险。因此,对甲状腺乳头状癌组织切片样品蛋白质组进行研究以期寻找疾病相关的潜在生物标志物,可助力疾病的及时发现和治疗,提高患者的治愈率和生存质量。 进一步对3例甲状腺肿和3例甲状腺乳头状癌的FFPE样品的蛋白质组进行分析。结果发现,两种组织切片样品中可以共同定量到432个蛋白质,任意两组样品之间的无标记定量强度的皮尔森相关系数超过0.85,证明此方法具有良好的定量分析重复性。为保证分析的准确性和可信度,通过 t 检验,我们将 p <0.05且甲状腺乳头状癌与甲状腺肿样品相比发生2倍变化的蛋白质作为差异蛋白。最终,与甲状腺肿相比,在甲状腺乳头状癌组织切片中定量到33个差异蛋白质,其中11个为上调蛋白质,22个为下调蛋白质,利用差异蛋白质成功实现了疾病的分子分型研究,证明利用组织切片蛋白质组可以成功实现甲状腺乳头状癌和甲状腺肿样品的区分。 对不同样品间的差异蛋白质涉及的生物学功能及KEGG通路进行分析。如 所示,与甲状腺肿样品相比,甲状腺乳头状癌患者明显变化的蛋白质主要参与了细胞外基质受体相互作用(显著性水平 p <3.4×10 -4 )、对活性氧响应( p <2.5×10 -5 )、抗氧化活性( p <6.5×10 -4 )、氧化应激反应( p <1.2×10 -3 )等过程,而文献报道这些过程与甲状腺疾病患者的代谢过程异常密切相关 ,证明此方法可实现高可信度的甲状腺组织切片样品的蛋白质组分析。 
3.1 Organization and implementation of the experiment

The open experimental course takes UHPLC-MS/MS-based quantitative proteomic analysis of thyroid cancer tissue sections as its topic and covers training in the operation of the liquid chromatography-high resolution mass spectrometry instrumentation, proteome sample preparation, and optimization of LC-MS analysis methods. The course is offered twice a year, each time selecting two to four students on merit, who complete the experimental project in collaboration with the instructors. The course combines theoretical and experimental teaching over a total of 32 class hours, of which 4 are theoretical and 28 experimental. The theoretical part covers the development history of LC-MS instrumentation (1 hour), instrument structure (2 hours), and working principles (1 hour). The experimental part covers operation of the LC-MS instrument (6 hours), optimization of instrument parameters (4 hours), development of proteome sample preparation methods (12 hours), and proteome database searching and data analysis (6 hours). In designing the course, the instructors analyze the students' disciplinary backgrounds and prior knowledge in detail. In the theoretical lectures, the instructors use animations, videos, and other multimedia tools to present complex technical principles intuitively and vividly. In addition, by weaving in historical context, theoretical derivations, and practical application cases, they dissect the scientific logic behind each technical step and help students build a comprehensive and systematic knowledge framework. Before the experimental course begins, the instructors prepare the materials and equipment and provide laboratory safety training, and the students read the experiment manual carefully to understand the principles, objectives, and precautions. During the experiments, students are encouraged to think actively, design experiments independently, and record data, observations, and results in a standardized way; instructors are present to give guidance and answer questions promptly. After the experiments, students organize their data and present their results, describing the experimental process, findings, and personal reflections, and considering future research trends in the field and possibilities for interdisciplinary integration. The instructors evaluate the students' reports and presentations and provide grades and feedback. At the same time, students' opinions and suggestions on the course design, teaching methods, and experimental guidance are solicited in order to continuously optimize and improve the curriculum.

3.2 Teaching reflections

During teaching, the instructors follow the principle of putting students at the center and emphasize interaction and communication with them. Students were able to think proactively and solve problems independently during the experiments and showed a strong spirit of teamwork. Through the open experimental course, students' innovative thinking, hands-on ability, and scientific literacy improved markedly. Nevertheless, several points in the teaching of the open experimental course deserve further thought and attention. First, the theoretical content should connect closely with, and appropriately extend, the analytical chemistry and instrumental analysis theory courses, so as to consolidate students' foundations and deepen their understanding of the theory. Second, sample preparation in this course is time-consuming and requires continuous operation, so the schedule should be coordinated with every student in the group before the course begins; activities such as literature reading and experimental discussion organized during breaks in the experiments can also increase students' initiative. During the experiments, each student's background and foundation differ, so instructors should fully consider students' differences and needs and arrange a reasonable division of labor within the group to ensure the experiments proceed smoothly. During training on the LC-MS instrument, students were reluctant to operate the instrument themselves, and theoretical knowledge was disconnected from practical application; instructors should encourage students to think more, practice more, and face problems directly while devising solutions. Finally, the atmosphere of interaction and collaboration in the presentation and exchange of learning outcomes could be further strengthened, for example by introducing peer review or panel review to provide more feedback and stimulate students' thinking.

This open experimental course makes full use of the high-quality resources of the university-level shared instrumentation platform and allows students to participate in and complete a research project. The course is rich in content and highly comprehensive and practical, encompassing systematic theoretical teaching, training in the operation of large-scale instruments, and a cutting-edge open experimental project, fully reflecting an educational philosophy that values both theoretical knowledge and practical skills. By introducing hands-on LC-MS instruction, the course enables students to understand the working principles of chromatography and mass spectrometry, master the operating procedures and maintenance essentials of large instruments, and improve their practical and problem-solving abilities. In addition, the course designed an open experimental project on UHPLC-MS/MS-based quantitative proteomic analysis of thyroid cancer tissue sections, covering the entire workflow from proteome sample preparation to data analysis, which helps cultivate students' practical ability, innovative spirit, and teamwork, raises their research literacy, and lays a foundation for their future scientific work and for training high-level innovative talent able to meet the demands of the new era. As educational models continue to evolve, we hope that this course can provide valuable experience and inspiration for future educational reform.
Older predicted age using AI EKG‐based age prediction is associated with higher Alzheimer’s disease neuropathologic change
be504b7e-7e98-4099-9996-934bef89fed5
11714791
Forensic Medicine[mh]
“Fontan Conduit Stent-Angioplasty and Progression of Fontan-Associated Liver Disease”
701e4f28-f034-4b9c-b728-a5474a3738c2
11787146
Surgical Procedures, Operative[mh]
Since being introduced in 1971, the Fontan palliation has served as the final surgical step in the single ventricle pathway for many different congenital cardiac pathologies 1 . As the surgical techniques have continued to evolve in addition to advancements in medical management, patients with Fontan physiology have continued to live longer, most well into adulthood. There are, however, many complications associated with the Fontan physiology including arrhythmia, thromboembolism, ventricular failure, kidney disease, protein-losing enteropathy, and liver fibrosis or Fontan-associated liver disease (FALD) . FALD remains clinically important especially when evaluating for transplant eligibility as it may necessitate combined heart-liver transplant versus heart transplant alone. While there is still much to explore regarding the pathophysiology, it is thought that a large contributor to FALD is persistently elevated central venous pressures, loss of venous pulsatility, and marginal cardiac output which can lead to cardiac cirrhosis . The development of FALD is insidious and is difficult to diagnose as the majority of patients will be asymptomatic without significant physical or laboratory abnormalities . A complication that may exacerbate FALD is Fontan conduit stenosis, which may compromise the hemodynamics of the conduit and lead to more severe central venous congestion, particularly proximal to the obstruction. Fontan conduit stenosis occurs at a considerable rate with a mean decrease of cross-sectional area (CSA) of 14% at 3 years for extracardiac conduits . Significant stenosis can be defined as a reduction of CSA of 25% or a measurable gradient across the conduit, which can occur in approximately 22% of patients . This may even be an underestimate at present, as patients followed longitudinally in the current era have more standardized imaging assessments of the Fontan conduit compared to in the past. Fontan stent angioplasty to relieve conduit stenosis has already been shown to improve various comorbidities of Fontan circulation, including NYHA class, exercise capacity, severity of protein-losing enteropathy (PLE), severity of ascites, and pulmonary vascular resistance (PVR) . Several studies have also demonstrated that higher Fontan and central venous pressures are often associated with more advanced histologic liver fibrosis and liver stiffness . The aim of this study was to evaluate whether non-invasive markers of FALD change following treatment of Fontan conduit stenosis via stent angioplasty. We hypothesized that stent angioplasty would reduce hepatic congestion, and thus hinder or even improve the progression of FALD. This retrospective, single-center study included all patients with Fontan physiology who had undergone Fontan conduit stent angioplasty at the Medical University of South Carolina. Our institutional catheterization record database was searched to identify the subject group that underwent this intervention between January 2017 and August 2022. Subjects’ medical records were reviewed for demographics, primary cardiac lesion, age at Fontan operation, type of Fontan repair, and extracardiac conduit (ECC) size if applicable. Chart review was performed to obtain the most proximal pre-intervention FALD markers, defined as serum hepatic biomarkers, Model for End-stage Liver Disease excluding INR (MELD-XI) scores, liver elastography stiffness (kPa), and liver biopsies if available. 
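Because MELD-XI scores are used throughout the results that follow, a hedged sketch of the calculation may help readers unfamiliar with it. The formula below is the commonly cited form of MELD-XI; whether the study implemented it exactly this way is an assumption, not something stated in the text.

```python
# Hedged sketch of a MELD-XI calculation as commonly cited (5.11*ln(bilirubin) +
# 11.76*ln(creatinine) + 9.44, with laboratory values below 1.0 mg/dL floored at 1.0).
# Whether the study used exactly this implementation is an assumption, not stated above.
import math

def meld_xi(bilirubin_mg_dl: float, creatinine_mg_dl: float) -> float:
    bili = max(bilirubin_mg_dl, 1.0)
    creat = max(creatinine_mg_dl, 1.0)
    return 5.11 * math.log(bili) + 11.76 * math.log(creat) + 9.44

print(round(meld_xi(1.1, 0.9), 1))   # example values similar in magnitude to those reported
```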
Pre-intervention echocardiograms were reviewed for assessments of ventricular function and atrioventricular valve regurgitation (AVVR). The cardiac catheterization record was reviewed to obtain baseline hemodynamics, cross-sectional area (CSA) of the Fontan conduit by angiography before and after stent deployment, type of stent utilized, and post-stent deployment hemodynamics. At a minimum of three months post-procedure, the same outcome markers were recorded at most proximal point to time of chart review (outside of hospitalization for decompensated heart failure and/or immediately prior to orthotopic heart transplant). Any available repeat cardiac catheterization hemodynamics and stented-conduit CSA measured by catheterization angiography or computerized tomography (CT) scan were also reviewed. The project was approved by the MUSC Institutional Review Board. The distribution of data was tested using the Shapiro–Wilk test. Data are reported as mean ± standard deviation for normally distributed data or median (interquartile range) for non-normally distributed data. Differences between pre- and post-stent variables were tested using Wilcoxon Signed-Rank test or the McNemar test as appropriate. A p value less than 0.05 was considered statistically significant. All statistics were performed using IBM SPSS Statistics software v. 27 (manufactured in Armonk, NY). A total of 33 Fontan patients underwent conduit stent angioplasty (52% males) within the study period, with demographic data summarized in Table . Indications for cardiac catheterization were surveillance hemodynamic assessment for the majority of subjects, and several others referred due to a clinical change. Patients were a median age of 3.7 years (IQR 3.0, 5.1) at the time of their Fontan operation. The majority of the cohort had an extracardiac conduit (ECC) ( n = 23, 70%) with the remaining patients having a lateral tunnel. Median ECC size at surgery was 18 mm (IQR 18, 20). Patients’ pre-stent angioplasty median MELD-XI score was 11.5 (IQR 9.0, 12.0) ( n = 32). Baseline liver elastography data were available for 23 of 33 patients prior to undergoing stent angioplasty, with median liver stiffness velocity of 2.0 m/s (IQR 1.7, 2.3) correlating to a liver stiffness of 12.0 kPa (IQR 9.0, 15.2). Three subjects had prior pre-stent liver biopsies ranging from grade 1 to 4 fibrosis on pathology. Most proximal pre-stent angioplasty echocardiogram showed the majority of patients had normal to mildly depressed ventricular function ( n = 25, 76%) and normal or mild AVVR (n = 26, 79%). Cardiac Catheterization Data During Stent Angioplasty The average age at conduit stent angioplasty was 23.8 ± 8.0 years, at an average of 19.3 ± 7 years from Fontan operation. Measurements of the inferior vena cava (IVC), Fontan conduit, and superior vena cava (SVC) at cardiac catheterization were performed in both posteroanterior (PA) and lateral projections on images with contrast (Fig. ). After stenting, the same diameters were measured again with calculations of pre- and post- angioplasty conduit CSA. Pre-stent angiography showed a median minimal Fontan conduit posteroanterior diameter of 12.6 mm (IQR 9.9, 14.0), minimal lateral diameter of 14.0 mm (IQR 10.5, 16.2), and the minimal Fontan CSA of 132 mm 2 (IQR 91, 173). The median SVC, Fontan, IVC, and mean pulmonary artery pressures were 15 mmHg (IQR 12, 18) and pulmonary capillary wedge pressures 10 mmHg (IQR 8, 11); median transpulmonary gradient was 5.0 mmHg (3.0, 6.5). 
The median cardiac index was 2.54 L/min/m² (IQR 2.31, 3.25), and the median pulmonary vascular resistance index was 2.06 WU·m² (IQR 1.61, 3.40). Subject group hemodynamic data and procedural measurements are summarized in Table . A variety of stent sizes and types were utilized for conduit stent angioplasty, with a range of maximum balloon expansion size from 10 to 28 mm (Supplemental Appendix ). Conduit stenting was successful in all subjects without complications. Following stent angioplasty, the dimensions of the Fontan conduit increased, with a median PA diameter of 20.0 mm (18.0, 21.0), lateral diameter of 20.0 mm (18.0, 22.0), and CSA of 314 mm² (255, 363). The immediate post-stent hemodynamics were minimally changed, with a median SVC pressure of 16.0 mmHg (12.8, 20.0), Fontan pressure of 15 mmHg (12.5, 18.0), and IVC pressure of 16 mmHg (13, 20). Post-stent cardiac output indices were not uniformly re-calculated. Of note, 13 subjects underwent concurrent interventions for other abnormal findings noted at the time of catheterization for Fontan conduit stent angioplasty: 8 collateral closures, 3 other stent angioplasties (coarctation, left pulmonary artery, and iliac vein), and 2 fenestration interventions (closure for 1 and stent balloon angioplasty for 1). Mid-Term Outcomes post-Stent Angioplasty Twenty-two subjects (67% of the cohort) had paired pre- and post-angioplasty labs to compare median MELD-XI scores, with average reassessment at 19 ± 15.5 months post-angioplasty. Subjects' baseline MELD-XI score was 10.5 (9.0, 12.0) and slightly increased on reassessment to 11.5 (9.0, 13.0) ( p = 0.053). The median post-angioplasty total bilirubin significantly increased from a baseline of 1.1 (0.7, 1.5) to 1.4 (0.9, 1.8) ( p = 0.04), though there was no significant change in pre- and post-angioplasty serum creatinine, platelet count, INR, serum sodium, total protein, albumin, or transaminases (Table ). Fifteen subjects (45% of the cohort) had paired pre- and post-angioplasty ultrasound elastography to reassess median liver stiffness at an average of 12.1 ± 8.9 months post-angioplasty. Baseline liver stiffness of 12.0 (9.4, 15.2) kPa showed a statistically insignificant downtrend to 10.8 kPa (9.0, 12.8) post-stent ( p = 0.13). Only eight subjects had spleen size recorded on paired pre- and post-angioplasty ultrasound reports, without a significant change ( p = 0.40). No post-angioplasty liver biopsies were available. There were no hospitalizations post-angioplasty for acute liver injury or decompensated liver failure in chart review. Among 16 subjects with matching pre- and post-angioplasty BNPs, the median baseline value of 33.5 (15.2, 78.6) significantly increased to 41.0 (0, 147.8) ( p = 0.02). There were no significant changes in echocardiographic ventricular function ( p = 1.00, n = 21 subjects) or hemodynamics via repeat catheterization ( n = 9 subjects; Wilcoxon signed-rank test comparisons of median pre- and post-angioplasty pressures all p > 0.05) (Table ). Seven subjects underwent repeat cardiac CTA on average 1.6 years post-intervention to reassess cardiovascular anatomy and stent patency, without any complications identified with the stent (Fig. ). Within the retrospective review period, one subject died of decompensated heart failure 6.5 months post-angioplasty, and two underwent orthotopic heart transplant between 9 and 11 months post-angioplasty.
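The paired pre- versus post-angioplasty comparisons above (MELD-XI, bilirubin, liver stiffness, BNP) follow the approach laid out in the statistical methods: assess the distribution, then apply a paired nonparametric test. A minimal sketch with made-up values is shown below; it mirrors, rather than reproduces, the SPSS analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical paired MELD-XI scores for illustration only (not study data)
pre = np.array([9.4, 10.5, 11.0, 12.0, 9.8, 10.2, 11.5, 12.3])
post = np.array([10.1, 11.0, 11.4, 12.6, 9.9, 10.8, 12.0, 12.9])

# Shapiro-Wilk on the paired differences guides the choice of test
w, p_norm = stats.shapiro(post - pre)

# Non-normal (or small-sample) paired data: Wilcoxon signed-rank test
stat, p_value = stats.wilcoxon(pre, post)
print(f"Shapiro-Wilk p={p_norm:.3f}, Wilcoxon signed-rank p={p_value:.3f}")
```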
Review of our single-center experience shows that Fontan stent angioplasty as treatment of conduit stenosis is safe, similar to others' reports. Stent angioplasty did not result in improved non-invasive FALD markers at mid-term follow-up in this diverse patient group. In fact, our results suggest continued evidence of worsening liver health, with a near-significant trend of increased post-angioplasty MELD-XI scores (most likely due to significantly increased total bilirubin on lab reassessments). There was not a similar trend of worsening labs thought to reflect clinically significant portal hypertension or liver dysfunction. Mean liver stiffness did not significantly change over 1 year post-angioplasty, and no patients had primary hospitalizations for decompensated liver failure or sequelae of cirrhosis. Among subjects with reassessment via cardiac catheterization, stent angioplasty did not result in consistent changes in Fontan or intra-cardiac hemodynamics. Significant Fontan conduit pathway obstruction is an increasingly recognized late complication in single ventricle patients, often occult in symptomatology and first identified at routine catheterization . This issue is intuitively problematic for success in this unique circulation, and treatment via percutaneous transcatheter stenting of the obstruction has shown promising improvements in exercise capacity and NYHA class, cyanosis, ascites, pulmonary hypertension, and protein-losing enteropathy . These improvements presumably decrease the risk of Fontan circulation-related complications, reduce or delay hospitalizations, and lower costs to the healthcare system. Similarly intuitive is the premise that conduit obstruction may significantly contribute to FALD through increased flow resistance, compromised pathway efficiency, and greater downstream hepatic congestion. However, it has not been clearly demonstrated that treatment of pathway obstruction similarly treats FALD. This is the first study to specifically examine whether there are any changes in non-invasive FALD markers post-stent angioplasty, and it did not identify any clear sign of improvement. The cause of the increased serum bilirubin noted post-angioplasty in this small sample is unclear, though it may be explained by other hepatic insults imposed by the Fontan circulation and univentricular dysfunction. The study sample was heterogeneous in baseline cardiac function, with many subjects exhibiting objective markers of Fontan and/or cardiac failure prior to stent angioplasty. Nearly a quarter of the cohort had baseline moderately or severely reduced ventricular function ( n = 8, 24%), at least moderate AVVR ( n = 7, 21%), and a resting Fontan pressure ≥ 18 mmHg ( n = 8, 24%). Nearly half had a reduced cardiac index (< 2.5 L/min/m²) at baseline catheterization ( n = 15, 45%).
With two subjects undergoing cardiac transplantation and one death within the study period, a portion of the baseline group was quite sick. Subjects with these or similar risk factors likely had closer clinical monitoring and were more represented in the sample with available post-angioplasty comparison outcomes. Relief of conduit obstruction would not be expected to completely reverse these other contributors to liver injury, which therefore confound these non-invasive FALD markers. A future study to specifically examine outcomes in a subject sample with similar (healthier) baseline univentricular health would be an important follow-up to this work. Similarly, a statistically significant increase in post-angioplasty BNP was found, with reassessment available in less than half of the cohort. These subjects with available reassessments may have represented a higher-risk group with more frequent clinical monitoring due to other suboptimal circulatory factors. Additionally, the absolute median BNP values (32 pre-stent to 41 post-stent) are still quite low and unlikely to reflect clinical relevance, particularly in the absence of similar echocardiographic or invasive hemodynamic changes. Overall, our results reiterate that Fontan stent angioplasty is safe and effective to treat conduit stenosis, though they suggest that clinicians should not expect to see an improvement in these non-invasive FALD markers at mid-term follow-up. There is a wide spectrum of “non-invasive FALD surveillance” markers, with practice diverging from center to center. There may be room for optimism in that our center's sample did not demonstrate statistically significant progression in these markers of FALD over 1–2 years post-angioplasty. It remains unknown whether the intervention may slow FALD progression compared to longstanding untreated conduit obstruction. A study utilizing matched controls diagnosed with Fontan conduit stenosis with and without stent angioplasty could best answer this question, particularly if excluding subjects with significant baseline Fontan and univentricular dysfunction to avoid confounders. However, this poses ethical challenges given the other studied benefits seen following obstruction treatment. Limitations There were several limitations in this study. The study design was retrospective and observational, conducted at a single center with a small cohort. Additionally, paired pre- and post-intervention data were not uniformly available among subjects, resulting in a smaller subject group for the post-angioplasty outcomes analysis. There was no control group to determine whether stent angioplasty may slow progression of FALD markers compared to patients with untreated conduit obstruction. A sizable portion of the cohort had markers of significant ventricular dysfunction, low cardiac output, AVVR, and high Fontan pressures. This study utilized the MELD-XI score and liver stiffness measured via ultrasound elastography as the primary surrogates for liver health, though non-invasive markers do vary from center to center without an accepted gold standard. These markers of hepatic health are additionally influenced by the aforementioned cardiac hemodynamics. Other modalities such as magnetic resonance elastography may prove to be better at assessing FALD.
Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 14 KB)
Physical literacy among Chinese elementary school students: the mediating role of physical knowledge and physical competency
243ccd13-728a-494d-baf5-60c20d5dcd37
11773747
Health Literacy[mh]
Physical literacy (PL) is theorized as the foundation of lifetime physical activity participation and is defined as the motivation, confidence, physical competency, knowledge and understanding to value and take responsibility for engagement in physical activities for life . During the elementary school stage, children are in a critical period of growth and development, and their physical and mental states are highly malleable . Physical activity plays an important role in promoting their growth, development, and overall physical fitness . Sufficient physical activity can effectively prevent and alleviate common diseases such as childhood obesity, type 2 diabetes, metabolic syndrome, and cardiovascular diseases . Moreover, the student period is also a good stage for acquiring knowledge, learning skills, and forming habits, and the good habits cultivated in childhood can offer lifelong benefit . Increased physical literacy has been associated with engagement in physical activity throughout one's life . Moreover, physical activity has been linked to an individual's physical literacy . Physical literacy is a necessary precursor to physical activity, but it is also developed through physical activity . However, owing to the rise of electronic devices and increasing academic pressure , decreased physical activity and increased sedentary behavior, which have also led to adolescent obesity and declining physical fitness, continue to plague schools and parents . To explore the internal causes of insufficient physical activity among elementary school students, factors related to health literacy have been increasingly identified and considered . Physical literacy offers a holistic approach that integrates the promotion of physical activity into the health care setting, accounting for the environment, psychosocial factors, individual abilities, and knowledge. The concept of physical literacy was first proposed by Whitehead in response to concerns about the lack of physical education in schools and students' physical activity, and it was later revised and improved to emphasize the philosophical foundations of monism, phenomenology, and existentialism underlying physical literacy . Ultimately, the core components of physical literacy are identified as the affective dimension (motivation and confidence), the physical dimension (physical competency), and the cognitive dimension (knowledge and understanding). The concept of physical literacy has been defined as “the motivation, confidence, physical competency, knowledge and understanding to maintain physical activity throughout the life course” . Students with sufficient motivation and knowledge are more likely to have a greater incentive to participate in physical activities . Physical competency has been defined as the ability to move and reflects an individual's motor abilities, motor skills, physical capabilities, and purposeful physical pursuits. The cognitive dimension reflects students' knowledge of physical fitness and the maintenance of the purposeful pursuit of physical activity. It is believed that PL may support health-related behavior, specifically physical activity, from a holistic perspective that regards the human being as a unity of body and mind or as the result of collected experiences in the world, which in turn form the basis for one's own process of perception . When an individual has lower PL, their ability to adopt a healthy lifestyle decreases, whereas their risk of engaging in unhealthy behaviors increases .
In response to this issue, the State Council of China issued the “National Fitness Plan (2016–2020)” , which clearly includes students' physical fitness levels in the performance assessment system. Additionally, the State Council General Office released “Opinions on Strengthening School Physical Education to Promote Students' Physical and Mental Health and Comprehensive Development” , which lists “comprehensively improving students' physical and health literacy” as one of the basic principles of school physical education. As evidenced by these publications, schools attach great importance to students' health literacy levels, with the aim of strengthening physical activity teaching and fostering healthy exercise habits among young people. However, owing to the complexities of the Chinese context and the lack of a standardized definition for PL , a unified measurement scale for assessing PL among Chinese elementary school students is still unavailable. Consequently, existing research has predominantly focused on reporting the current status of health literacy among middle and high school students . To address this gap, our study independently developed a comprehensive questionnaire specifically tailored to measure PL in elementary school students, considering their distinct characteristics. The primary objective of this study is to conduct a thorough assessment of the prevailing state of PL among Chinese elementary school students in terms of four critical dimensions: physical knowledge, physical competency, physical motivation, and physical participation. This research endeavors to align with the national policy aimed at enhancing the overall health literacy and physical literacy of the entire population , thereby propelling the realization of a healthier China. Moreover, research both domestically and internationally on the internal interrelations of health literacy is scarce, and discussions on how to enhance internal factors to promote physical activity are lacking . This study aims to explore the internal motivation that drives elementary school students to participate in physical activity and the reasons behind their engagement in such activities. It seeks to unveil the association between intrinsic physical motivation and actual physical participation. In addition, the article proposes strategies to enhance elementary school students' participation in real-life physical activity by augmenting and improving aspects of their intrinsic motivation. Research questions (1) What is the current status of physical literacy among Chinese elementary school students by region and by grade level? (2) Do correlations exist among the four internal dimensions of physical literacy in Chinese elementary school students? If such correlations exist, what are the principal mediating factors involved? (3) Specifically, is it feasible to enhance the conversion from intrinsic physical motivation to extrinsic physical participation through the augmentation of both physical knowledge and skills?
Participants This study was conducted from June to July 2022 and used a multistage cluster sampling method to select the study subjects. In the first step, one province each was selected from the eastern, central, and western regions of China: Hebei Province, Sichuan Province, and Qinghai Province, respectively. Additionally, the municipality of Shenzhen in southern China was selected. In the second step, one city was randomly chosen from each province: Cangzhou, Luzhou, and Haidong, respectively. In the third step, one elementary school was selected from each city for the study. In the fourth step, 1–2 classes were randomly selected from grades 1–2, 3–4, and 5–6 in the chosen schools, and a questionnaire survey was conducted among all the students in the selected classes. The four selected cities range in economic development level, with the highest level being in Shenzhen and the lowest in Haidong. We ensured an even distribution of students' provinces across all grade bands (Appendix Table ). The inclusion criteria were being a student in the selected classes and informed consent from all the participants. The exclusion criterion was unwillingness to participate in the study. The questionnaire was self-administered. A total of 3,275 questionnaires were distributed in this study, and ultimately, 3,091 valid questionnaires were collected (allowing for a maximum missing data rate of 20%), resulting in an effective response rate of 94.38%. The valid questionnaires were distributed across grade levels as follows: 1,015 in grades 1–2, 1,013 in grades 3–4, and 1,063 in grades 5–6. The average age was 9.04 ± 1.60 years. All participants provided informed consent. This study obtained ethical approval from the Institutional Review Board of the Chinese Academy of Medical Sciences & Peking Union Medical College, which oversees research involving human subjects. Physical literacy assessment scale for Chinese 6–14-year-old children Owing to variations in cognitive levels and comprehension abilities among children of different age groups, this study developed three versions of the scale: PL-Grades 12 (Physical Literacy Scale for grades 1–2), PL-Grades 34 (Physical Literacy Scale for grades 3–4), and PL-Grades 56 (Physical Literacy Scale for grades 5–6) (specific items of the three scales are provided in Tables , and ; Fig. in Appendix ). All three scales shared the same composition, consisting of a personal information section and a measurement section for physical literacy. The personal information section comprised eight questions used to gather students' name, gender, age, grade, ethnicity, myopia status, physical activity status, and internet usage. In the measurement section for physical literacy, four dimensions were defined on the basis of existing Chinese policy documents and guidelines for children's health literacy: physical knowledge, physical motivation, physical activity competency, and physical participation. The scale had a maximum score of 100 points. Prior to the assessment, weights were assigned to each dimension on the basis of the importance ratings provided by Delphi experts, and scores were allocated accordingly. (The scores for each dimension are presented in Appendix Table .) The PL measures explained 53.1%, 50.3%, and 54.7% of the variance, respectively, all of which exceed the 50% threshold. The unidimensionality assumption remained valid, and internal consistency proved robust, facilitating effective differentiation among students with varying levels of proficiency.
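The actual Delphi-derived dimension weights are reported only in the Appendix, so the values below are placeholders; this minimal sketch simply illustrates how such weights could roll the four dimension scores up into the 100-point PL total described above.

```python
# Hypothetical dimension weights summing to 100 points; the actual Delphi-derived
# allocation is given in the paper's Appendix and may differ.
WEIGHTS = {"knowledge": 25, "motivation": 20, "competency": 25, "participation": 30}

def pl_total(raw: dict, max_raw: dict) -> float:
    """Scale each dimension's raw score to its weighted maximum and sum to a 0-100 total."""
    return sum(WEIGHTS[d] * raw[d] / max_raw[d] for d in WEIGHTS)

# Example student with made-up raw scores and (hypothetical) per-dimension maxima
raw = {"knowledge": 22.0, "motivation": 15.0, "competency": 14.0, "participation": 9.0}
max_raw = {"knowledge": 25.0, "motivation": 20.0, "competency": 25.0, "participation": 30.0}
print(round(pl_total(raw, max_raw), 1))  # 60.0
```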
All items in the three questionnaires underwent validity and reliability testing using the Rasch model of item response theory. The item reliability coefficient for all questionnaires was 1, while the item separation indices were 14.23, 15.41, and 21.25, all exceeding 3, indicating good internal consistency. The unidimensionality assumption held true for all three questionnaires. For most items in the questionnaires, the average infit MNSQ and outfit MNSQ values ranged between 0.5 and 1.5, suggesting a strong overall fit. Physical knowledge—independent variable The assessment of physical activity-related knowledge pertains to the ability of Chinese elementary school students to acquire physical knowledge related to physical activity and their understanding and mastery of such knowledge. On the basis of the “Core Information and Interpretation of Health Education for Chinese Adolescents (2018 Edition)” and the “Physical Activity Guidelines for Children and Adolescents in China” , a knowledge item bank corresponding to three grade levels was constructed. These items primarily assessed children’s understanding of health and physical activity, recommendations regarding sedentary behavior, and awareness of safety during physical activities. The questions for this dimension were presented in a true/false format. Physical motivation—mediator variable 1 The Children’s Self-Perception of Adequacy and Preference for Physical Activity (CSAPPA) scale involves children’s perceptions of enjoyment, competency, and desire to engage in physical activity (PA) . The CSAPPA assesses children’s self-evaluations of their preferences for PA and their self-perceived ability to meet certain acceptable success criteria in PA. All the items in this section were adapted from the CSAPPA and were presented in the form of true/false questions for grades 1–4. Items for grades 5–6 were ranked on a five-level Likert scale with the following options: “strongly agree, agree, average, disagree, strongly disagree”. Physical competency—mediator variable 2 The domain of physical activity ability primarily assesses the mastery of various levels of physical ability, presented in the form of multiple-choice questions. In this section, the selected physical activities were divided into recreational activities (such as cycling, dancing, swimming, roller skating, taekwondo, martial arts, and skiing) and nonrecreational activities (such as running, gymnastics, jumping rope, shuttlecock kicking, table tennis, badminton, hiking, and sit-ups/pull-ups). According to the “Physical Education and Health Curriculum Standards for Compulsory Education (2022)” and references from the literature , roller skating, taekwondo, and skiing are classified as emerging sports. Physical participation—dependent variable The domain of physical participation includes the weekly duration and frequency of exercise. For grades 1–6, exercise frequency was assessed via the following question: “How many times have you done this exercise in the past 7 days?” The options were presented on the following four-point scale: “No Attendance, 2 Visits, 5 Visits, Every Day”. In grades 5–6, a question about the frequency of exercise was added to ask about the duration of each exercise, with the responses on a four-point scale: “≤0.5 hours, 0.75 hours, 1.5 hours, 2 hours”. The scoring criteria for the items were established mainly on the basis of the results of Rasch analysis. 
Rasch analysis provides difficulty values for each item within a given dimension and ranks them accordingly. Owing to the presence of negative difficulty values, a standardization method is employed by shifting the difficulty values to the right by four decimal places, where a higher numerical value indicates greater item difficulty. The score z(i) for each individual item within a dimension is assigned on the basis of the total score y(i) and the item difficulty x(i). A correct response yields a score of z(i), whereas an incorrect response does not yield any points. The scoring for the physical participation dimension of grades 1–6, which was based on the four-level Likert scale, was computed using the abovementioned formula. A score was subsequently derived for engaging in physical participation on all seven days of the week, and further scores were then calculated on the basis of the number of days of physical participation for each option. Similarly, the scoring for frequency of physical activity in grades 5–6, which used a four-level Likert scale, was determined through the application of the abovementioned formula to obtain a score for engaging in physical activity for 2 h, followed by the calculation of scores corresponding to the duration of physical activity represented by each option. Furthermore, the physical motivation dimension for grades 5–6, which employed a five-level Likert scale, was scored via the provided formula to derive scores for the “strongly agree” option, with the “agree” option being assigned two-thirds of that score and the “neutral” option being assigned one-third of that score, whereas individuals selecting “disagree” or “strongly disagree” did not receive a score. Detailed scores for each item can be found in Appendix . Statistical analysis The questionnaire item data were analyzed for their fit to the Rasch model using Winsteps (version 3.66.0; https://www.winsteps.com/index.htm ). Statistical analyses and ROC curve plotting were conducted using SPSS 24.0. Categorical variables are described as proportions, whereas continuous variables are presented as mean ± SD. The qualification rate for physical literacy was determined on the basis of the optimal cutoff value of the receiver operating characteristic (ROC) curve. Additionally, achieving a score equivalent to 60% of the total score in each of the four dimensions was deemed as meeting the qualification criteria for that specific dimension. When the participant’s score was greater than the cutoff value obtained from the ROC curve, they were deemed qualified; otherwise, they were deemed unqualified. Logistic regression models adjusted for sex and age were used to assess the relationship between physical knowledge and physical participation. For continuous bivariate variables, t tests were used in single-factor analysis, whereas for continuous multivariate variables, one-way ANOVA was employed in single-factor analysis. Post hoc pairwise comparisons were conducted using the LSD method. The internal correlations between physical activity and health literacy were examined using Pearson correlation tests. 
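The qualification cutoffs were taken from ROC curves, but the reference label used to define a "qualified" student for that analysis is not spelled out here. The sketch below therefore only illustrates the usual Youden-index approach to picking an optimal cutoff, with made-up scores and labels; it is not a reconstruction of the authors' SPSS procedure.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical data: total PL scores and a binary reference label of "qualified"
scores = np.array([45, 52, 55, 58, 60, 63, 66, 70, 74, 80], dtype=float)
labels = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])

fpr, tpr, thresholds = roc_curve(labels, scores)
youden_j = tpr - fpr                        # Youden index at each candidate threshold
optimal_cutoff = thresholds[np.argmax(youden_j)]
qualified = scores > optimal_cutoff         # scores above the cutoff are deemed qualified

print(f"optimal cutoff = {optimal_cutoff}, qualification rate = {qualified.mean():.0%}")
```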
Finally, variables associated with both the independent and dependent variables were included in a mediation model that sought to identify and explain the mechanism underlying the observed relationship between the independent variable (physical motivation) and the dependent variable (physical participation) via the inclusion of hypothesized third variables (physical knowledge, physical competency), known as mediators. This model also allowed us to quantify the proportion mediated with respect to the total effect of physical motivation on physical participation. Different paths were created in this model: Path a, representing the effect of physical motivation on the mediators; Path b, representing the effect of the mediators on physical participation; Path a*b (known as the indirect effect), which represents the effect of physical motivation on physical participation transmitted through the mediators; Path c, representing the total effect of physical motivation on physical participation; and Path c' (known as the direct effect), which represents the effect of physical motivation on physical participation after controlling for the mediators and other covariates. Mediation analysis was conducted using R 4.0.5. The bootstrap sampling approach within the product-of-coefficients method was utilized for the mediation analysis, with 5000 resampling iterations. A significance level of P < 0.05 was established for all hypothesis tests.
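The authors ran this analysis in R; purely to illustrate the product-of-coefficients bootstrap described above, a compact sketch with simulated data is given below in Python. The variable names and the data-generating step are hypothetical and are not drawn from the study data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
# Simulated data: motivation -> competency -> participation (names are hypothetical)
motivation = rng.normal(size=n)
competency = 0.6 * motivation + rng.normal(size=n)                          # path a
participation = 0.5 * competency + 0.2 * motivation + rng.normal(size=n)    # paths b and c'

def ols_coef(y, *xs):
    """Return slope coefficients from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

indirect = []
for _ in range(5000):                                   # 5000 bootstrap resamples
    idx = rng.integers(0, n, n)
    a = ols_coef(competency[idx], motivation[idx])[0]                        # path a
    b = ols_coef(participation[idx], competency[idx], motivation[idx])[0]    # path b
    indirect.append(a * b)                              # indirect effect a*b

ci_low, ci_high = np.percentile(indirect, [2.5, 97.5])
c_total = ols_coef(participation, motivation)[0]        # total effect c
print(f"indirect a*b 95% CI: ({ci_low:.3f}, {ci_high:.3f}); "
      f"proportion mediated ~ {np.mean(indirect) / c_total:.2f}")
```

The proportion mediated reported later in the results corresponds to the ratio of the indirect effect to the total effect computed in the last two lines.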
Demographic characteristics Among the 3,091 elementary school students surveyed, 1,499 (49.72%) were boys, and 1,516 (50.28%) were girls. Regarding grade levels, grades 1–2 accounted for 1,015 students (32.84%), grades 3–4 accounted for 1,013 students (32.77%), and grades 5–6 accounted for 1,063 students (34.39%). Among the provinces, the Eastern region had 615 students (19.90%), the Central region had 656 students (21.22%), the Western region had 1,102 students (35.65%), and the Southern region had 718 students (23.23%) (Table ). Physical literacy score and qualification rate The average PL score of the elementary school students was 63.46 (SD 12.01). There were variations in overall physical literacy scores across different grade levels and provinces. The PL scores for grades 1–2 were 66.79 (SD 12.87), for grades 3–4 were 60.89 (SD 11.87), and for grades 5–6 were 62.72 (SD 10.48). Notably, the students in higher grades had lower health literacy scores. The overall average health literacy score for elementary school students in the eastern provinces was the lowest, at 58.82 (SD 13.58). In the central provinces, the average health literacy score was 63.11 (SD 12.00), whereas in the western provinces, it was 64.64 (SD 10.79). The highest overall health literacy score was reported for elementary school students from southern urban areas, with an average score of 65.94 (SD 11.26) (Table ). For grades 1–2, the average score for physical knowledge was 22.76 (SD 2.52); for grades 3–4, it was 22.52 (SD 3.28); and for grades 5–6, it was 21.85 (SD 3.78). The average score for physical motivation in grades 1–2 was 19.34 (SD 3.61), that in grades 3–4 was 14.06 (SD 4.83), and that in grades 5–6 was 15.76 (SD 4.55). The average scores for physical competency were 13.80 (SD 5.61) for grades 1–2, 15.95 (SD 5.70) for grades 3–4, and 16.71 (SD 5.23) for grades 5–6. The average scores for physical participation were 10.89 (SD 6.23) for grades 1–2, 8.36 (SD 5.02) for grades 3–4, and 8.41 (SD 3.16) for grades 5–6 (Table ). The optimal cutoff values for grades 1–2, grades 3–4, and grades 5–6 in physical literacy, as determined by the ROC curve, were 58.05, 53.53, and 60.77 points, respectively. Similarly, the qualification rates for health literacy were 76.35%, 73.45%, and 59.17%, respectively, for the three academic levels. The proficiency rates in physical knowledge across the three academic stages were 98.13%, 96.84%, and 92.76%, respectively. In terms of physical competency, the qualification rates were 42.27%, 42.25%, and 50.89%, respectively.
For physical motivation, the qualification rates were 91.03% in grades 1–2, 61.30% in grades 3–4, and 63.78% in grades 5–6. With respect to physical participation, the qualification rates were 14.48% in grades 1–2, 9.97% in grades 3–4, and 3.95% in grades 5–6. Correlations among PL dimensions The Pearson correlation analysis revealed that physical literacy was positively correlated with each of the four dimensions: physical knowledge ( r = 0.363), physical motivation ( r = 0.608), physical competency ( r = 0.716), and physical participation ( r = 0.751) ( P < 0.01). Notably, the correlation between PL and physical motivation consistently increased across the three grade levels ( r = 0.529, r = 0.596, r = 0.674, P < 0.01), whereas the correlation between PL and physical competency progressively decreased ( r = 0.826, r = 0.770, r = 0.767, P < 0.01) over the same grade levels. Furthermore, the correlation between PL and physical participation also continuously decreased ( r = 0.852, r = 0.701, r = 0.615, P < 0.01) across the three grade levels. Moreover, the Pearson correlation analysis revealed that, for the lower grades, physical participation scores were positively correlated with scores for physical knowledge ( r = 0.140), physical motivation ( r = 0.206), and physical competency ( r = 0.648) ( P < 0.01). In the middle grades, physical participation scores were positively correlated with physical motivation ( r = 0.132) and physical competency ( r = 0.480) ( P < 0.01). In the upper grades, physical participation scores were positively correlated with physical motivation ( r = 0.282) and physical competency ( r = 0.458) ( P < 0.01) but negatively correlated with physical knowledge ( r = -0.103) ( P < 0.01). As grade level increased, the correlations of physical competency and physical knowledge with physical participation decreased. However, in grades 5–6, the correlation between physical participation and physical motivation increased (Table ). Linear regression The multiple linear regression analysis of physical participation and physical motivation revealed that province, grade level, and electronic screen usage all influenced physical participation. Specifically, a middle (β = -0.136, 95% CI -1.940 ~ -1.004) or high (β = -0.111, 95% CI -1.683 ~ -0.680) grade level negatively predicted physical participation. Conversely, physical motivation (β = 0.206, 95% CI 0.175 ~ 0.253), Western urban location (β = 0.127, 95% CI 0.867 ~ 1.822), and electronic screen usage (β = 0.079, 95% CI 0.380 ~ 1.221) positively predicted physical participation. Upon introducing the mediating variable of physical competency scores into the model (Model 2), both physical motivation (β = 0.090, 95% CI 0.059 ~ 0.128) and physical competency (β = 0.516, 95% CI 0.437 ~ 0.492) positively predicted physical participation. Following the introduction of the mediating variable of physical knowledge into the model (Model 3), a significant positive relationship emerged between physical participation and physical motivation (β = 0.211, 95% CI 0.179 ~ 0.259), whereas the association between physical participation and physical knowledge was not statistically significant (β = -0.035, 95% CI -0.108 ~ 0.000) (Table ). Mediation analysis The results of the bootstrap mediation analysis revealed a significant mediating effect of physical competency on the relationship between physical motivation and physical participation ( P < 0.001).
The proportion of the total effect attributable to the mediating effect of physical competency was 56.40%. Conversely, the mediating effect of physical knowledge on the association between physical motivation and physical participation was not statistically significant ( P = 0.055) (Fig. & Appendix Table ).
Mediation analysis

The bootstrap mediation analysis revealed a significant mediating effect of physical competency on the relationship between physical motivation and physical participation (P < 0.001), with 56.40% of the total effect attributable to this mediating pathway. In contrast, the mediating effect of physical knowledge on the association between physical motivation and physical participation was not statistically significant (P = 0.055) (Fig. & Appendix Table ).
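The sketch below shows one common way to implement a percentile-bootstrap test of the indirect effect motivation → competency → participation. It is an assumed re-implementation rather than the procedure actually used in the study; the column names and covariates are hypothetical, and the proportion mediated is simply the indirect effect divided by the total effect.

```python
# Percentile-bootstrap sketch of the indirect effect
# motivation -> competency -> participation. Illustrative only; assumed columns.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

COVARS = "C(region) + C(grade_band) + screen_use"

def indirect_effect(data: pd.DataFrame) -> float:
    """a-path (motivation -> competency) times b-path (competency -> participation)."""
    a = smf.ols(f"competency ~ motivation + {COVARS}",
                data=data).fit().params["motivation"]
    b = smf.ols(f"participation ~ motivation + competency + {COVARS}",
                data=data).fit().params["competency"]
    return a * b

def bootstrap_mediation(df: pd.DataFrame, n_boot: int = 1000, seed: int = 0) -> dict:
    rng = np.random.default_rng(seed)
    total = smf.ols(f"participation ~ motivation + {COVARS}",
                    data=df).fit().params["motivation"]
    point = indirect_effect(df)
    n = len(df)
    boots = np.array([indirect_effect(df.iloc[rng.integers(0, n, n)])
                      for _ in range(n_boot)])
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return {
        "indirect": point,
        "total": total,
        "proportion_mediated": point / total,  # analogous to the 56.40% reported above
        "boot_95_CI": (lo, hi),                # significant if this interval excludes 0
    }
```

With a few thousand resamples the bootstrap interval stabilizes; 1,000 is used here only to keep the sketch fast.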
Discussion

The increased attention to the concepts of physical literacy and competence stems from structural changes in society that have affected the physical activity habits of children and adolescents, coupled with concerns about increasingly sedentary lifestyles . This study aimed to elucidate the current state of physical literacy among Chinese elementary school students.

The investigation revealed an average physical literacy score of 63.46 among elementary school students, with higher grade levels exhibiting lower physical literacy scores than lower grade levels. The physical literacy qualification rate in grades 5–6 was lower than that of students in grades 3–4 and grades 1–2. Previous research has reported a health literacy prevalence rate of 69.83% among adolescents , and elementary school students in Nanjing demonstrated a health literacy rate of 73.7% . In Qingdao, elementary school students reported a physical literacy rate of 76.3%, with scores decreasing as grade level increased , which aligns with the findings of the present study.

The qualification rates for physical knowledge in this study showed minimal variation and were consistently high across all grade levels. This finding suggests that schools should continue to dedicate considerable attention to the teaching and dissemination of health knowledge in both daily learning and everyday life. Students show a high capacity for acquiring health knowledge, and elementary school students demonstrated commendable proficiency in physical knowledge. However, although knowledge proficiency was relatively uniform across grades 1–2, 3–4, and 5–6, overall physical literacy declined. An analysis of the four dimensions of physical literacy across grade levels revealed that, as elementary school students progressed to higher grades, their qualification rate for physical competency increased, whereas the qualification rates for physical motivation and physical participation declined noticeably. This pattern is a primary factor in the reduction in the overall physical literacy qualification rate with increasing grade level.

These findings suggest that sports skills training, both inside and outside school, is effective: across both emerging sports and in-school test items, elementary school students master a growing number of sports as they age, steadily building their skills. However, higher-grade students engaged in less physical activity than their lower-grade counterparts, even though the latter have mastered fewer sports. This difference could be attributed to several factors. First, the academic burden increases: as students advance through the grades, the volume of after-school assignments grows, contributing to a sense of urgency and pressure associated with academic tasks and the impending transition to higher education; consequently, elementary school students may sacrifice time originally allocated to physical activities to cope with these academic pressures . Second, with the rapid development of networked electronic devices, students increasingly engage with and master new and appealing electronic products as their cognitive abilities develop, and time originally designated for physical activity is now often spent on these devices. Excessive use of electronic devices increases students' sedentary time and negatively affects their physical health .

The correlation analysis revealed that the correlation between physical literacy and physical motivation increased consistently across grade levels, whereas the correlations with physical competency and physical participation progressively decreased.
In the future, efforts to promote health among elementary school students should emphasize enhancing students' physical motivation. Teachers, families, friends, and other stakeholders should provide external support for elementary school students. At the same time, the teaching process should incorporate more engaging elements, blending education with entertainment, to reduce students' aversion to certain essential physical activities within the school setting. This approach aims to identify and reinforce both internal and external motivations for physical activity among growing children, encouraging students to actively choose healthy physical activities .

The mediation analysis clarified the association between physical motivation and physical activity behavior: the mediating effect of physical competency was significant , whereas the mediating effect of physical knowledge was not statistically significant. These results underscore the pivotal role that physical competency, rather than physical knowledge, plays among the internal dimensions of physical literacy in translating motivation into physical participation. As physical literacy education for elementary school students develops, greater attention to external health information can strengthen motivation to engage in physical activity, leading to the acquisition of more physical activity skills . With the continuous enhancement of both physical motivation and physical competency, students will willingly participate in more physical activities, creating a positive feedback loop.

In light of the preceding analysis, educators and policy-makers may consider encouraging students to set personal health goals and emphasizing the importance of physical activity for individual health in order to increase students' intrinsic motivation. Furthermore, creating a positive and joyful school culture and home environment can foster sustained motivation to engage in physical activities . Parents and community members can create more opportunities for students to participate in sports activities by taking part in school sports activities themselves and by providing sports equipment or venues . The mediating effect of skills underscores the importance of competence in carrying out physical activities. Schools can therefore reinforce basic motor skills training within the curriculum, enhancing students' proficiency in various motor skills . This training equips them with the prerequisites for engaging in physical activities and consequently elevates their levels of physical participation. In addition, schools can organize regular sports competitions and activities to give students opportunities to demonstrate their sports skills while stimulating their interest and enthusiasm for sports activities . Ultimately, through these comprehensive efforts, students can be encouraged to form a lifelong habit of participating in physical activities and to improve their health literacy.

Limitations

This study has several limitations. First, because the data were collected through self-administered paper questionnaires completed by elementary school students, some data were missing. Multiple imputation methods were employed to address some of these missing values, which could affect accuracy to some extent (an illustrative sketch of one possible imputation step is given below). Second, because the questionnaires were self-administered by elementary school students, bias was inevitably present in the completion process, for example mutual influence among classmates and less-than-serious responses. Furthermore, this study used cluster sampling and selected only four provinces nationwide, which could introduce selection bias. Finally, this was a cross-sectional study, making causal inference difficult.
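As a purely illustrative aside, the imputation step mentioned in the first limitation could look roughly like the sketch below. The paper does not report which multiple-imputation implementation or settings were used, so the library choice (scikit-learn's IterativeImputer), the column names, and the number of imputations are all assumptions.

```python
# Hypothetical sketch of chained-equations-style multiple imputation for
# missing questionnaire items; not the procedure actually used in the study.
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables IterativeImputer)
from sklearn.impute import IterativeImputer

def impute_items(df: pd.DataFrame, item_cols, n_imputations: int = 5):
    """Return several independently imputed copies of the item columns."""
    completed = []
    for seed in range(n_imputations):
        imputer = IterativeImputer(random_state=seed, sample_posterior=True, max_iter=10)
        out = df.copy()
        out[item_cols] = imputer.fit_transform(df[item_cols])
        completed.append(out)
    return completed

# Downstream analyses would be run on each imputed copy and the results pooled.
```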
Conclusion

With the progression of grade level, physical competency improved year by year. However, owing to the pronounced decrease in physical motivation among students in grades 3–4 and 5–6, the physical literacy qualification rate tended to decline with increasing grade level. Given the mediating effect of physical competency, educators and policy-makers in schools and government can focus future health promotion efforts on enhancing students' motivation to engage in physical activity. This involves fostering voluntary learning of health knowledge and reinforcing students' motor skills training, thereby helping translate elementary school students' physical motivation into physical participation.

Electronic supplementary material

Supplementary Material 1
Supplementary Material 2
Supplementary Material 3
Supplementary Material 4